id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.05659
|
Benjamin Horne
|
Maur\'icio Gruppi, Benjamin D. Horne, Sibel Adal{\i}
|
NELA-GT-2022: A Large Multi-Labelled News Dataset for The Study of
Misinformation in News Articles
|
Technical report documenting the most recent NELA-GT update
(NELA-GT-2022). arXiv admin note: substantial text overlap with
arXiv:2102.04567
| null | null | null |
cs.CL cs.CY cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present the fifth installment of the NELA-GT datasets,
NELA-GT-2022. The dataset contains 1,778,361 articles from 361 outlets between
January 1st, 2022 and December 31st, 2022. Just as in past releases of the
dataset, NELA-GT-2022 includes outlet-level veracity labels from Media
Bias/Fact Check and tweets embedded in collected news articles. The
NELA-GT-2022 dataset can be found at: https://doi.org/10.7910/DVN/AMCV2H
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 21:58:33 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 22:21:50 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Gruppi",
"Maurício",
""
],
[
"Horne",
"Benjamin D.",
""
],
[
"Adalı",
"Sibel",
""
]
] |
new_dataset
| 0.999901 |
2203.11854
|
Jakob Hoydis
|
Jakob Hoydis, Sebastian Cammerer, Fay\c{c}al Ait Aoudia, Avinash Vem,
Nikolaus Binder, Guillermo Marcus, Alexander Keller
|
Sionna: An Open-Source Library for Next-Generation Physical Layer
Research
|
5 pages, 1 figure, 4 code listings
| null | null | null |
cs.IT cs.AI cs.LG math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sionna is a GPU-accelerated open-source library for link-level simulations
based on TensorFlow. It enables the rapid prototyping of complex communication
system architectures and provides native support for the integration of neural
networks. Sionna implements a wide breadth of carefully tested state-of-the-art
algorithms that can be used for benchmarking and end-to-end performance
evaluation. This allows researchers to focus on their research, making it more
impactful and reproducible, while saving time implementing components outside
their area of expertise. This white paper provides a brief introduction to
Sionna, explains its design principles and features, as well as future
extensions, such as integrated ray tracing and custom CUDA kernels. We believe
that Sionna is a valuable tool for research on next-generation communication
systems, such as 6G, and we welcome contributions from our community.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 16:31:44 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 13:51:38 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Hoydis",
"Jakob",
""
],
[
"Cammerer",
"Sebastian",
""
],
[
"Aoudia",
"Fayçal Ait",
""
],
[
"Vem",
"Avinash",
""
],
[
"Binder",
"Nikolaus",
""
],
[
"Marcus",
"Guillermo",
""
],
[
"Keller",
"Alexander",
""
]
] |
new_dataset
| 0.99933 |
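The Sionna record above is about link-level simulation. For readers unfamiliar with the term, a minimal link-level simulation chains a bit source, a symbol mapper, a channel model, and a demapper, and measures error rates. The NumPy sketch below illustrates that loop for QPSK over AWGN; it deliberately does not use Sionna's API, and every name in it is local to the example.

```python
import numpy as np

def qpsk_ber(ebno_db: float, num_bits: int = 200_000, seed: int = 0) -> float:
    """Estimate the bit error rate of Gray-mapped QPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, num_bits)                      # binary source
    # Map bit pairs to QPSK symbols (Gray mapping), unit average symbol energy.
    i, q = 1 - 2 * bits[0::2], 1 - 2 * bits[1::2]
    symbols = (i + 1j * q) / np.sqrt(2)
    # AWGN: Es/N0 = 2 * Eb/N0 for QPSK (2 bits per symbol).
    esno = 2 * 10 ** (ebno_db / 10)
    noise_std = np.sqrt(1 / (2 * esno))
    rx = symbols + noise_std * (rng.standard_normal(symbols.size)
                                + 1j * rng.standard_normal(symbols.size))
    # Hard-decision demapping back to bits.
    bits_hat = np.empty_like(bits)
    bits_hat[0::2] = (rx.real < 0).astype(int)
    bits_hat[1::2] = (rx.imag < 0).astype(int)
    return float(np.mean(bits_hat != bits))

if __name__ == "__main__":
    for ebno in (0, 2, 4, 6, 8):
        print(f"Eb/N0 = {ebno} dB -> BER ~ {qpsk_ber(ebno):.4f}")
```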
2206.00792
|
Jun Muramatsu
|
Jun Muramatsu
|
Channel Codes for Relayless Networks with General Message Access
Structure
|
(v1) 26 pages, to be submitted to IEEE ITW2023, (v2) 27 pages, Remark 1
and Lemma 9 in v1 are deleted, Lemma 7 in v2 is added, Eq. (13) and the proof
of Lemma 7 in v1 (Eq. (14) and the proof of Lemma 8 in v2) are revised
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Channel codes for relayless networks with a general message access
structure are introduced. It is shown that the multi-letter characterized
capacity region of this network is achievable with these codes. The capacity
region is characterized in terms of entropy functions and provides an
alternative to the regions introduced by [Somekh-Baruch and Verd\'u,
ISIT2006][Muramatsu and Miyake, ISITA2018].
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 22:56:06 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 02:42:53 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Muramatsu",
"Jun",
""
]
] |
new_dataset
| 0.999723 |
2206.04803
|
Mirko Zichichi
|
Nadia Pocher, Mirko Zichichi, Fabio Merizzi, Muhammad Zohaib Shafiq
and Stefano Ferretti
|
Detecting Anomalous Cryptocurrency Transactions: an AML/CFT Application
of Machine Learning-based Forensics
| null | null | null | null |
cs.CR cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In shaping the Internet of Money, the application of blockchain and
distributed ledger technologies (DLTs) to the financial sector triggered
regulatory concerns. Notably, while the user anonymity enabled in this field
may safeguard privacy and data protection, the lack of identifiability hinders
accountability and challenges the fight against money laundering and the
financing of terrorism and proliferation (AML/CFT). As law enforcement agencies
and the private sector apply forensics to track crypto transfers across
ecosystems that are socio-technical in nature, this paper focuses on the
growing relevance of these techniques in a domain where their deployment
impacts the traits and evolution of the sphere. In particular, this work offers
contextualized insights into the application of methods of machine learning and
transaction graph analysis. Namely, it analyzes a real-world dataset of Bitcoin
transactions represented as a directed graph network through various
techniques. The modeling of blockchain transactions as a complex network
suggests that the use of graph-based data analysis methods can help classify
transactions and identify illicit ones. Indeed, this work shows that the neural
network types known as Graph Convolutional Networks (GCN) and Graph Attention
Networks (GAT) are a promising AML/CFT solution. Notably, in this scenario GCN
outperform other classic approaches and GAT are applied for the first time to
detect anomalies in Bitcoin. Ultimately, the paper upholds the value of
public-private synergies to devise forensic strategies conscious of the spirit
of explainability and data openness.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 16:22:55 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2022 14:20:55 GMT"
},
{
"version": "v3",
"created": "Sat, 18 Mar 2023 14:41:31 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Pocher",
"Nadia",
""
],
[
"Zichichi",
"Mirko",
""
],
[
"Merizzi",
"Fabio",
""
],
[
"Shafiq",
"Muhammad Zohaib",
""
],
[
"Ferretti",
"Stefano",
""
]
] |
new_dataset
| 0.984302 |
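The abstract above reports that GCNs and GATs work well for flagging illicit Bitcoin transactions modeled as a directed graph. As a hedged sketch of the general setup (not the authors' architecture or data), a two-layer GCN node classifier in PyTorch Geometric might look like the following; the feature dimension, class count, and toy graph tensors are placeholders.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class TxGCN(torch.nn.Module):
    """Two-layer GCN that classifies transaction nodes as licit/illicit."""
    def __init__(self, in_dim: int, hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)          # per-node logits

# Toy example: 4 transaction nodes with 8 features each and a few directed edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
y = torch.tensor([0, 0, 1, 0])                    # 1 = illicit

model = TxGCN(in_dim=8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x, edge_index), y)
    loss.backward()
    optimizer.step()
```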
2208.12900
|
Jie Zhou
|
Jie Zhou, John Criswell, Michael Hicks
|
Fat Pointers for Temporal Memory Safety of C
| null | null |
10.1145/3586038
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal memory safety bugs, especially use-after-free and double free bugs,
pose a major security threat to C programs. Real-world exploits utilizing these
bugs enable attackers to read and write arbitrary memory locations, causing
disastrous violations of confidentiality, integrity, and availability. Many
previous solutions retrofit temporal memory safety to C, but they all either
incur high performance overhead and/or miss detecting certain types of temporal
memory safety bugs.
In this paper, we propose a temporal memory safety solution that is both
efficient and comprehensive. Specifically, we extend Checked C, a
spatially-safe extension to C, with temporally-safe pointers. These are
implemented by combining two techniques: fat pointers and dynamic key-lock
checks. We show that the fat-pointer solution significantly improves running
time and memory overhead compared to the disjoint-metadata approach that
provides the same level of protection. With empirical program data and hands-on
experience porting real-world applications, we also show that our solution is
practical in terms of backward compatibility -- one of the major complaints
about fat pointers.
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 00:39:27 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 03:31:10 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Zhou",
"Jie",
""
],
[
"Criswell",
"John",
""
],
[
"Hicks",
"Michael",
""
]
] |
new_dataset
| 0.998572 |
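The Checked C abstract above names two ingredients: fat pointers and dynamic key-lock checks. As an illustrative sketch only (Python pseudocode of the general key-lock idea, not the paper's C implementation), each allocation gets a fresh key stored in a lock table, a fat pointer carries a copy of that key, and every dereference compares the two, so a use-after-free is caught once the lock has been invalidated.

```python
import itertools

_next_key = itertools.count(1)
_lock_table = {}            # allocation id -> current lock (0 means freed)
_heap = {}                  # allocation id -> backing storage

class FatPtr:
    """A 'fat' pointer: an address plus the key captured at allocation time."""
    def __init__(self, obj_id: int, key: int):
        self.obj_id, self.key = obj_id, key

def checked_malloc(value) -> FatPtr:
    obj_id = len(_heap)
    key = next(_next_key)
    _heap[obj_id] = value
    _lock_table[obj_id] = key
    return FatPtr(obj_id, key)

def checked_free(p: FatPtr) -> None:
    _lock_table[p.obj_id] = 0      # invalidate the lock; later derefs must fail

def deref(p: FatPtr):
    if _lock_table.get(p.obj_id, 0) != p.key:   # dynamic key-lock check
        raise RuntimeError("temporal memory safety violation (use-after-free)")
    return _heap[p.obj_id]

p = checked_malloc("payload")
print(deref(p))            # ok: key matches the lock
checked_free(p)
try:
    deref(p)               # the key no longer matches the (cleared) lock
except RuntimeError as e:
    print(e)
```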
2209.09659
|
Thorbj{\o}rn Mosekj{\ae}r Iversen
|
Thorbj{\o}rn Mosekj{\ae}r Iversen, Rasmus Laurvig Haugaard, Anders
Glent Buch
|
Ki-Pode: Keypoint-based Implicit Pose Distribution Estimation of Rigid
Objects
|
11 pages, 2 figures
|
The 33rd British Machine Vision Conference Proceedings: BMVC 2022
| null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The estimation of 6D poses of rigid objects is a fundamental problem in
computer vision. Traditionally pose estimation is concerned with the
determination of a single best estimate. However, a single estimate is unable
to express visual ambiguity, which in many cases is unavoidable due to object
symmetries or occlusion of identifying features. Inability to account for
ambiguities in pose can lead to failure in subsequent methods, which is
unacceptable when the cost of failure is high. Estimates of full pose
distributions are, contrary to single estimates, well suited for expressing
uncertainty on pose. Motivated by this, we propose a novel pose distribution
estimation method. An implicit formulation of the probability distribution over
object pose is derived from an intermediary representation of an object as a
set of keypoints. This ensures that the pose distribution estimates have a high
level of interpretability. Furthermore, our method is based on conservative
approximations, which leads to reliable estimates. The method has been
evaluated on the task of rotation distribution estimation on the YCB-V and
T-LESS datasets and performs reliably on all objects.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 11:59:05 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Iversen",
"Thorbjørn Mosekjær",
""
],
[
"Haugaard",
"Rasmus Laurvig",
""
],
[
"Buch",
"Anders Glent",
""
]
] |
new_dataset
| 0.982227 |
2210.01954
|
Travis Gagie
|
Xing Lyu, Travis Gagie and Meng He
|
Rectangular Ruler Wrapping
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define {\sc Rectangular Ruler Wrapping} as a natural variant of the {\sc
Ruler Wrapping} problem proposed by O'Rourke at CCCG '21, and give a simple,
online and quadratic-time algorithm for it, under the simplifying assumption
that the last segment must extend strictly beyond every other in the relevant
direction.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 23:00:34 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 17:58:25 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Dec 2022 21:38:00 GMT"
},
{
"version": "v4",
"created": "Fri, 17 Mar 2023 19:15:54 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Lyu",
"Xing",
""
],
[
"Gagie",
"Travis",
""
],
[
"He",
"Meng",
""
]
] |
new_dataset
| 0.997194 |
2210.07316
|
Niklas Muennighoff
|
Niklas Muennighoff, Nouamane Tazi, Lo\"ic Magne, Nils Reimers
|
MTEB: Massive Text Embedding Benchmark
|
24 pages, 14 tables, 6 figures
| null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text embeddings are commonly evaluated on a small set of datasets from a
single task not covering their possible applications to other tasks. It is
unclear whether state-of-the-art embeddings on semantic textual similarity
(STS) can be equally well applied to other tasks like clustering or reranking.
This makes progress in the field difficult to track, as various models are
constantly being proposed without proper evaluation. To solve this problem, we
introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding
tasks covering a total of 58 datasets and 112 languages. Through the
benchmarking of 33 models on MTEB, we establish the most comprehensive
benchmark of text embeddings to date. We find that no particular text embedding
method dominates across all tasks. This suggests that the field has yet to
converge on a universal text embedding method and scale it up sufficiently to
provide state-of-the-art results on all embedding tasks. MTEB comes with
open-source code and a public leaderboard at
https://github.com/embeddings-benchmark/mteb.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 19:42:08 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Feb 2023 15:59:49 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Mar 2023 13:37:01 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Muennighoff",
"Niklas",
""
],
[
"Tazi",
"Nouamane",
""
],
[
"Magne",
"Loïc",
""
],
[
"Reimers",
"Nils",
""
]
] |
new_dataset
| 0.953187 |
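The MTEB abstract above points to the open-source code at https://github.com/embeddings-benchmark/mteb. A minimal usage sketch, following the pattern documented in that repository at the time of writing; the task name, model name, and keyword arguments may differ between library versions, so treat them as assumptions to verify against the README.

```python
# pip install mteb sentence-transformers
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any model exposing an `encode(list_of_texts) -> embeddings` method works.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Evaluate on a single classification task; MTEB spans 8 task types overall.
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
print(results)
```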
2210.12036
|
Bastien Rivier
|
Guilherme D. da Fonseca and Yan Gerard and Bastien Rivier
|
On the Longest Flip Sequence to Untangle Segments in the Plane
|
9 pages, 4 figures, appears in Walcom'23
| null | null | null |
cs.CG cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
A set of segments in the plane may form a Euclidean TSP tour or a matching,
among others. Optimal TSP tours as well as minimum weight perfect matchings
have no crossing segments, but several heuristics and approximation algorithms
may produce solutions with crossings. To improve such solutions, we can
successively apply a flip operation that replaces a pair of crossing segments
by non-crossing ones. This paper considers the maximum number D(n) of flips
performed on n segments. First, we present reductions relating D(n) for
different sets of segments (TSP tours, monochromatic matchings, red-blue
matchings, and multigraphs). Second, we show that if all except t points are in
convex position, then D(n) = O(tn^2), providing a smooth transition between the
convex O(n^2) bound and the general O(n^3) bound. Last, we show that if instead
of counting the total number of flips, we only count the number of distinct
flips, then the cubic upper bound improves to O(n^{8/3}).
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 15:29:03 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2022 16:12:21 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2023 19:37:22 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"da Fonseca",
"Guilherme D.",
""
],
[
"Gerard",
"Yan",
""
],
[
"Rivier",
"Bastien",
""
]
] |
new_dataset
| 0.993123 |
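The flip operation counted in the record above replaces two crossing segments by a non-crossing re-pairing of their endpoints. A small sketch, assuming a perfect-matching setting and points in general position; the orientation-based crossing test is standard, and choosing the re-pairing of smaller total length is only a heuristic (for a properly crossing pair, both re-pairings are non-crossing and shorter).

```python
from math import dist

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crosses(s, t):
    """True if segments s = (a, b) and t = (c, d) properly cross."""
    (a, b), (c, d) = s, t
    return (orient(a, b, c) != orient(a, b, d)
            and orient(c, d, a) != orient(c, d, b))

def flip(s, t):
    """Replace a crossing pair by the shorter of the two re-pairings."""
    (a, b), (c, d) = s, t
    option1 = ((a, c), (b, d))
    option2 = ((a, d), (b, c))
    total = lambda pair: dist(*pair[0]) + dist(*pair[1])
    return min(option1, option2, key=total)

s, t = ((0, 0), (2, 2)), ((0, 2), (2, 0))   # two crossing segments
assert crosses(s, t)
print(flip(s, t))                            # a non-crossing replacement
```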
2210.17217
|
Lawrence Yunliang Chen
|
Lawrence Yunliang Chen, Baiyu Shi, Daniel Seita, Richard Cheng, Thomas
Kollar, David Held, Ken Goldberg
|
AutoBag: Learning to Open Plastic Bags and Insert Objects
|
ICRA 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thin plastic bags are ubiquitous in retail stores, healthcare, food handling,
recycling, homes, and school lunchrooms. They are challenging both for
perception (due to specularities and occlusions) and for manipulation (due to
the dynamics of their 3D deformable structure). We formulate the task of
"bagging:" manipulating common plastic shopping bags with two handles from an
unstructured initial state to an open state where at least one solid object can
be inserted into the bag and lifted for transport. We propose a self-supervised
learning framework where a dual-arm robot learns to recognize the handles and
rim of plastic bags using UV-fluorescent markings; at execution time, the robot
does not use UV markings or UV light. We propose the AutoBag algorithm, where
the robot uses the learned perception model to open a plastic bag through
iterative manipulation. We present novel metrics to evaluate the quality of a
bag state and new motion primitives for reorienting and opening bags based on
visual observations. In physical experiments, a YuMi robot using AutoBag is
able to open bags and achieve a success rate of 16/30 for inserting at least
one item across a variety of initial bag configurations. Supplementary material
is available at https://sites.google.com/view/autobag.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 10:57:10 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 06:26:38 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Chen",
"Lawrence Yunliang",
""
],
[
"Shi",
"Baiyu",
""
],
[
"Seita",
"Daniel",
""
],
[
"Cheng",
"Richard",
""
],
[
"Kollar",
"Thomas",
""
],
[
"Held",
"David",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.997559 |
2211.05405
|
Nghia Hieu Nguyen
|
Nghia Hieu Nguyen, Duong T.D. Vo, Minh-Quan Ha
|
VieCap4H-VLSP 2021: ObjectAoA-Enhancing performance of Object Relation
Transformer with Attention on Attention for Vietnamese image captioning
|
Accepted for publishing at the VNU Journal of Science: Computer
Science and Communication Engineering
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Image captioning is currently a challenging task that requires the ability to
both understand visual information and use human language to describe this
visual information in the image. In this paper, we propose an efficient way to
improve the image understanding ability of transformer-based methods by
extending Object Relation Transformer architecture with Attention on Attention
mechanism. Experiments on the VieCap4H dataset show that our proposed method
significantly outperforms its original structure on both the public test and
private test of the Image Captioning shared task held by VLSP.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 08:19:44 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Nov 2022 08:10:13 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Feb 2023 03:35:55 GMT"
},
{
"version": "v4",
"created": "Mon, 20 Mar 2023 08:29:29 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Nguyen",
"Nghia Hieu",
""
],
[
"Vo",
"Duong T. D.",
""
],
[
"Ha",
"Minh-Quan",
""
]
] |
new_dataset
| 0.994947 |
2211.10307
|
Kostas Papafitsoros
|
Kostas Papafitsoros, Luk\'a\v{s} Adam, Vojt\v{e}ch \v{C}erm\'ak,
Luk\'a\v{s} Picek
|
SeaTurtleID: A novel long-span dataset highlighting the importance of
timestamps in wildlife re-identification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces SeaTurtleID, the first public large-scale, long-span
dataset with sea turtle photographs captured in the wild. The dataset is
suitable for benchmarking re-identification methods and evaluating several
other computer vision tasks. The dataset consists of 7774 high-resolution
photographs of 400 unique individuals collected within 12 years in 1081
encounters. Each photograph is accompanied by rich metadata, e.g., identity
label, head segmentation mask, and encounter timestamp. The 12-year span of the
dataset makes it the longest-spanned public wild animal dataset with
timestamps. By exploiting this unique property, we show that timestamps are
necessary for an unbiased evaluation of animal re-identification methods
because they allow time-aware splits of the dataset into reference and query
sets. We show that time-unaware (random) splits can lead to performance
overestimation of more than 100% compared to the time-aware splits for both
feature- and CNN-based re-identification methods. We also argue that time-aware
splits correspond to more realistic re-identification pipelines than the
time-unaware ones. We recommend that animal re-identification methods should
only be tested on datasets with timestamps using time-aware splits, and we
encourage dataset curators to include such information in the associated
metadata.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 15:46:24 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 11:30:49 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Papafitsoros",
"Kostas",
""
],
[
"Adam",
"Lukáš",
""
],
[
"Čermák",
"Vojtěch",
""
],
[
"Picek",
"Lukáš",
""
]
] |
new_dataset
| 0.999848 |
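The SeaTurtleID abstract above argues that random reference/query splits leak temporal information and inflate re-identification scores, while time-aware splits avoid this. A minimal sketch of the two splitting strategies on a generic table of (identity, timestamp) photo records; the column names and cutoff here are hypothetical, not the dataset's actual schema.

```python
import pandas as pd

def random_split(df: pd.DataFrame, query_frac: float = 0.2, seed: int = 0):
    """Time-unaware split: sample query photos uniformly at random."""
    query = df.sample(frac=query_frac, random_state=seed)
    reference = df.drop(query.index)
    return reference, query

def time_aware_split(df: pd.DataFrame, cutoff: str):
    """Time-aware split: photos before the cutoff form the reference set,
    later photos form the query set, so no future photos leak into the reference."""
    cutoff = pd.Timestamp(cutoff)
    reference = df[df["timestamp"] < cutoff]
    query = df[df["timestamp"] >= cutoff]
    return reference, query

# Toy example with hypothetical columns "identity" and "timestamp".
photos = pd.DataFrame({
    "identity": ["t1", "t1", "t2", "t2", "t1", "t2"],
    "timestamp": pd.to_datetime([
        "2012-06-01", "2015-07-10", "2013-05-02",
        "2019-08-20", "2021-06-15", "2022-07-01",
    ]),
})
ref, qry = time_aware_split(photos, cutoff="2019-01-01")
print(len(ref), "reference photos,", len(qry), "query photos")
```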
2211.11432
|
Muhammad Jehanzeb Mirza
|
M. Jehanzeb Mirza, Inkyu Shin, Wei Lin, Andreas Schriebl, Kunyang Sun,
Jaesung Choe, Horst Possegger, Mateusz Kozinski, In So Kweon, Kun-Jin Yoon,
Horst Bischof
|
MATE: Masked Autoencoders are Online 3D Test-Time Learners
|
Code is available at this repository:
https://github.com/jmiemirza/MATE
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Our MATE is the first Test-Time-Training (TTT) method designed for 3D data,
which makes deep networks trained for point cloud classification robust to
distribution shifts occurring in test data. Like existing TTT methods from the
2D image domain, MATE also leverages test data for adaptation. Its test-time
objective is that of a Masked Autoencoder: a large portion of each test point
cloud is removed before it is fed to the network, tasked with reconstructing
the full point cloud. Once the network is updated, it is used to classify the
point cloud. We test MATE on several 3D object classification datasets and show
that it significantly improves robustness of deep networks to several types of
corruptions commonly occurring in 3D point clouds. We show that MATE is very
efficient in terms of the fraction of points it needs for the adaptation. It
can effectively adapt given as few as 5% of tokens of each test sample, making
it extremely lightweight. Our experiments show that MATE also achieves
competitive performance by adapting sparsely on the test data, which further
reduces its computational overhead, making it ideal for real-time applications.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 13:19:08 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2022 10:52:59 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Mar 2023 09:44:58 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Mirza",
"M. Jehanzeb",
""
],
[
"Shin",
"Inkyu",
""
],
[
"Lin",
"Wei",
""
],
[
"Schriebl",
"Andreas",
""
],
[
"Sun",
"Kunyang",
""
],
[
"Choe",
"Jaesung",
""
],
[
"Possegger",
"Horst",
""
],
[
"Kozinski",
"Mateusz",
""
],
[
"Kweon",
"In So",
""
],
[
"Yoon",
"Kun-Jin",
""
],
[
"Bischof",
"Horst",
""
]
] |
new_dataset
| 0.999502 |
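The MATE record above describes test-time training with a masked-autoencoder objective: keep a small fraction of each test point cloud, reconstruct the rest, update the network, then classify. A self-contained toy sketch of that loop, assuming a PointNet-style encoder, a fixed cloud size, and a one-sided Chamfer reconstruction loss; it illustrates the idea only and is not the authors' implementation.

```python
import torch
import torch.nn as nn

N = 256                      # points per cloud (toy size)

encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, N * 3))
classifier = nn.Linear(64, 10)

def encode(points):
    """PointNet-style permutation-invariant encoding: pointwise MLP + mean pool."""
    return encoder(points).mean(dim=0)

def test_time_adapt(points, steps=5, keep_ratio=0.05):
    """MAE-style test-time training on a single point cloud (sketch, not MATE)."""
    opt = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()),
                          lr=1e-3)
    for _ in range(steps):
        keep = torch.randperm(N)[: max(1, int(keep_ratio * N))]
        latent = encode(points[keep])                  # encode the visible ~5%
        recon = decoder(latent).reshape(N, 3)          # reconstruct the full cloud
        loss = torch.cdist(recon, points).min(dim=1).values.mean()  # one-sided Chamfer
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return classifier(encode(points))              # logits after adaptation

logits = test_time_adapt(torch.randn(N, 3))
print(logits.shape)
```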
2211.11436
|
Haram Choi
|
Haram Choi, Jeongmin Lee and Jihoon Yang
|
N-Gram in Swin Transformers for Efficient Lightweight Image
Super-Resolution
|
CVPR 2023 camera-ready. Codes are available at
https://github.com/rami0205/NGramSwin
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
While some studies have proven that Swin Transformer (Swin) with window
self-attention (WSA) is suitable for single image super-resolution (SR), the
plain WSA ignores the broad regions when reconstructing high-resolution images
due to a limited receptive field. In addition, many deep learning SR methods
suffer from intensive computations. To address these problems, we introduce the
N-Gram context to the low-level vision with Transformers for the first time. We
define N-Gram as neighboring local windows in Swin, which differs from text
analysis that views N-Gram as consecutive characters or words. N-Grams interact
with each other by sliding-WSA, expanding the regions seen to restore degraded
pixels. Using the N-Gram context, we propose NGswin, an efficient SR network
with SCDP bottleneck taking multi-scale outputs of the hierarchical encoder.
Experimental results show that NGswin achieves competitive performance while
maintaining an efficient structure when compared with previous leading methods.
Moreover, we also improve other Swin-based SR methods with the N-Gram context,
thereby building an enhanced model: SwinIR-NG. Our improved SwinIR-NG
outperforms the current best lightweight SR approaches and establishes
state-of-the-art results. Codes are available at
https://github.com/rami0205/NGramSwin.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 13:23:52 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 12:56:46 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Mar 2023 12:48:37 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Choi",
"Haram",
""
],
[
"Lee",
"Jeongmin",
""
],
[
"Yang",
"Jihoon",
""
]
] |
new_dataset
| 0.997813 |
2211.11674
|
Dario Pavllo
|
Dario Pavllo, David Joseph Tan, Marie-Julie Rakotosaona, Federico
Tombari
|
Shape, Pose, and Appearance from a Single Image via Bootstrapped
Radiance Field Inversion
|
CVPR 2023. Code and models are available at
https://github.com/google-research/nerf-from-image
| null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Fields (NeRF) coupled with GANs represent a promising
direction in the area of 3D reconstruction from a single view, owing to their
ability to efficiently model arbitrary topologies. Recent work in this area,
however, has mostly focused on synthetic datasets where exact ground-truth
poses are known, and has overlooked pose estimation, which is important for
certain downstream applications such as augmented reality (AR) and robotics. We
introduce a principled end-to-end reconstruction framework for natural images,
where accurate ground-truth poses are not available. Our approach recovers an
SDF-parameterized 3D shape, pose, and appearance from a single image of an
object, without exploiting multiple views during training. More specifically,
we leverage an unconditional 3D-aware generator, to which we apply a hybrid
inversion scheme where a model produces a first guess of the solution which is
then refined via optimization. Our framework can de-render an image in as few
as 10 steps, enabling its use in practical scenarios. We demonstrate
state-of-the-art results on a variety of real and synthetic benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 17:42:42 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 11:33:18 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Pavllo",
"Dario",
""
],
[
"Tan",
"David Joseph",
""
],
[
"Rakotosaona",
"Marie-Julie",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.995547 |
2212.02870
|
Ayush Tripathi
|
Ayush Tripathi, Prathosh AP, Suriya Prakash Muthukrishnan, Lalan Kumar
|
TripCEAiR: A Multi-Loss minimization approach for surface EMG based
Airwriting Recognition
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Airwriting Recognition refers to the problem of identification of letters
written in space with movement of the finger. It can be seen as a special case
of dynamic gesture recognition wherein the set of gestures are letters in a
particular language. Surface Electromyography (sEMG) is a non-invasive approach
used to capture electrical signals generated as a result of contraction and
relaxation of the muscles. sEMG has been widely adopted for gesture recognition
applications. Unlike static gestures, dynamic gestures are user-friendly and
can be used as a method for input with applications in Human Computer
Interaction. There has been limited work in recognition of dynamic gestures
such as airwriting, using sEMG signals and forms the core of the current work.
In this work, a multi-loss minimization framework for sEMG based airwriting
recognition is proposed. The proposed framework aims at learning a feature
embedding vector that minimizes the triplet loss, while simultaneously learning
the parameters of a classifier head to recognize corresponding alphabets. The
proposed method is validated on a dataset recorded in the lab, comprising
sEMG signals from 50 participants writing English uppercase letters. The
effect of different variations of triplet loss, triplet mining strategies and
feature embedding dimension is also presented. The best-achieved accuracy was
81.26% and 65.62% in user-dependent and independent scenarios respectively by
using semihard positive and hard negative triplet mining. The code for our
implementation will be made available at https://github.com/ayushayt/TripCEAiR.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 10:20:19 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jan 2023 17:03:38 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Mar 2023 16:39:38 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Tripathi",
"Ayush",
""
],
[
"AP",
"Prathosh",
""
],
[
"Muthukrishnan",
"Suriya Prakash",
""
],
[
"Kumar",
"Lalan",
""
]
] |
new_dataset
| 0.99556 |
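The abstract above describes jointly minimizing a triplet loss on the embedding and a classification loss on a classifier head. A hedged sketch of that multi-loss objective in PyTorch; the encoder, input shape, and batch construction are placeholders rather than the paper's network or data.

```python
import torch
import torch.nn as nn

embed_dim, num_classes = 128, 26                 # 26 English uppercase letters

encoder = nn.Sequential(nn.Flatten(), nn.Linear(8 * 200, 256), nn.ReLU(),
                        nn.Linear(256, embed_dim))       # toy sEMG encoder
head = nn.Linear(embed_dim, num_classes)                 # classifier head

triplet_loss = nn.TripletMarginLoss(margin=1.0)
ce_loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

# One toy batch: anchor/positive share a label, negative has a different one.
anchor, positive, negative = (torch.randn(16, 8, 200) for _ in range(3))
labels = torch.randint(0, num_classes, (16,))

z_a, z_p, z_n = encoder(anchor), encoder(positive), encoder(negative)
loss = triplet_loss(z_a, z_p, z_n) + ce_loss(head(z_a), labels)   # multi-loss
opt.zero_grad()
loss.backward()
opt.step()
```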
2301.06051
|
Haiyang Wang
|
Haiyang Wang, Chen Shi, Shaoshuai Shi, Meng Lei, Sen Wang, Di He,
Bernt Schiele, Liwei Wang
|
DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing an efficient yet deployment-friendly 3D backbone to handle sparse
point clouds is a fundamental problem in 3D perception. Compared with the
customized sparse convolution, the attention mechanism in Transformers is more
appropriate for flexibly modeling long-range relationships and is easier to
deploy in real-world applications. However, due to the sparse characteristics
of point clouds, it is non-trivial to apply a standard transformer on sparse
points. In this paper, we present Dynamic Sparse Voxel Transformer (DSVT), a
single-stride window-based voxel Transformer backbone for outdoor 3D
perception. In order to efficiently process sparse points in parallel, we
propose Dynamic Sparse Window Attention, which partitions a series of local
regions in each window according to its sparsity and then computes the features
of all regions in a fully parallel manner. To allow the cross-set connection,
we design a rotated set partitioning strategy that alternates between two
partitioning configurations in consecutive self-attention layers. To support
effective downsampling and better encode geometric information, we also propose
an attention-style 3D pooling module on sparse points, which is powerful and
deployment-friendly without utilizing any customized CUDA operations. Our model
achieves state-of-the-art performance with a broad range of 3D perception
tasks. More importantly, DSVT can be easily deployed by TensorRT with real-time
inference speed (27Hz). Code will be available at
\url{https://github.com/Haiyang-W/DSVT}.
|
[
{
"version": "v1",
"created": "Sun, 15 Jan 2023 09:31:58 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 16:36:27 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Wang",
"Haiyang",
""
],
[
"Shi",
"Chen",
""
],
[
"Shi",
"Shaoshuai",
""
],
[
"Lei",
"Meng",
""
],
[
"Wang",
"Sen",
""
],
[
"He",
"Di",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Wang",
"Liwei",
""
]
] |
new_dataset
| 0.962957 |
2302.08063
|
Raghav Goyal
|
Raghav Goyal, Effrosyni Mavroudi, Xitong Yang, Sainbayar Sukhbaatar,
Leonid Sigal, Matt Feiszli, Lorenzo Torresani, Du Tran
|
MINOTAUR: Multi-task Video Grounding From Multimodal Queries
|
22 pages, 8 figures and 13 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video understanding tasks take many forms, from action detection to visual
query localization and spatio-temporal grounding of sentences. These tasks
differ in the type of inputs (only video, or video-query pair where query is an
image region or sentence) and outputs (temporal segments or spatio-temporal
tubes). However, at their core they require the same fundamental understanding
of the video, i.e., the actors and objects in it, their actions and
interactions. So far these tasks have been tackled in isolation with
individual, highly specialized architectures, which do not exploit the
interplay between tasks. In contrast, in this paper, we present a single,
unified model for tackling query-based video understanding in long-form videos.
In particular, our model can address all three tasks of the Ego4D Episodic
Memory benchmark which entail queries of three different forms: given an
egocentric video and a visual, textual or activity query, the goal is to
determine when and where the answer can be seen within the video. Our model
design is inspired by recent query-based approaches to spatio-temporal
grounding, and contains modality-specific query encoders and task-specific
sliding window inference that allow multi-task training with diverse input
modalities and different structured outputs. We exhaustively analyze
relationships among the tasks and illustrate that cross-task learning leads to
improved performance on each individual task, as well as the ability to
generalize to unseen tasks, such as zero-shot spatial localization of language
queries.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 04:00:03 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 20:46:33 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Goyal",
"Raghav",
""
],
[
"Mavroudi",
"Effrosyni",
""
],
[
"Yang",
"Xitong",
""
],
[
"Sukhbaatar",
"Sainbayar",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Feiszli",
"Matt",
""
],
[
"Torresani",
"Lorenzo",
""
],
[
"Tran",
"Du",
""
]
] |
new_dataset
| 0.99911 |
2302.09330
|
Martin Gruber
|
Martin Gruber, Michael Heine, Norbert Oster, Michael Philippsen,
Gordon Fraser
|
Practical Flaky Test Prediction using Common Code Evolution and Test
History Data
|
12 pages, to be published in the Proceedings of the IEEE
International Conference on Software Testing, Verification and Validation
(ICST 2023)
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-deterministically behaving test cases cause developers to lose trust in
their regression test suites and to eventually ignore failures. Detecting flaky
tests is therefore a crucial task in maintaining code quality, as it builds the
necessary foundation for any form of systematic response to flakiness, such as
test quarantining or automated debugging. Previous research has proposed
various methods to detect flakiness, but when trying to deploy these in an
industrial context, their reliance on instrumentation, test reruns, or
language-specific artifacts was inhibitive. In this paper, we therefore
investigate the prediction of flaky tests without such requirements on the
underlying programming language, CI, build or test execution framework.
Instead, we rely only on the most commonly available artifacts, namely the
tests' outcomes and durations, as well as basic information about the code
evolution to build predictive models capable of detecting flakiness.
Furthermore, our approach does not require additional reruns, since it gathers
this data from existing test executions. We trained several established
classifiers on the suggested features and evaluated their performance on a
large-scale industrial software system, from which we collected a data set of
100 flaky and 100 non-flaky test- and code-histories. The best model was able
to achieve an F1-score of 95.5% using only 3 features: the tests' flip rates,
the number of changes to source files in the last 54 days, as well as the
number of changed files in the most recent pull request.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 13:34:39 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Mar 2023 13:53:56 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Gruber",
"Martin",
""
],
[
"Heine",
"Michael",
""
],
[
"Oster",
"Norbert",
""
],
[
"Philippsen",
"Michael",
""
],
[
"Fraser",
"Gordon",
""
]
] |
new_dataset
| 0.992446 |
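The abstract above predicts flakiness from simple history features such as a test's flip rate and recent code-churn counts. A hedged sketch of computing a flip rate from an outcome history and fitting an off-the-shelf classifier; the toy rows, feature ordering, and model choice are illustrative, not the paper's exact pipeline.

```python
from sklearn.ensemble import RandomForestClassifier

def flip_rate(outcomes):
    """Fraction of consecutive runs where the verdict changed (pass <-> fail)."""
    if len(outcomes) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    return flips / (len(outcomes) - 1)

# Each row: (outcome history, #source-file changes in last 54 days,
#            #changed files in last PR, label 1 = flaky)
tests = [
    (["pass", "fail", "pass", "pass", "fail"], 12, 3, 1),
    (["pass", "pass", "pass", "pass", "pass"], 4, 1, 0),
    (["fail", "pass", "fail", "pass", "fail"], 20, 7, 1),
    (["pass", "pass", "fail", "fail", "fail"], 2, 2, 0),   # real regression
]
X = [[flip_rate(h), churn, pr_files] for h, churn, pr_files, _ in tests]
y = [label for *_, label in tests]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[0.5, 10, 4]]))   # expect [1]: predicted flaky
```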
2302.09466
|
Yunlong Wang
|
Yunlong Wang, Shuyuan Shen, Brian Y. Lim
|
RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards
Precise Expressions
|
To appear in Proceedings of the 2023 CHI Conference on Human Factors
in Computing Systems (CHI '23)
| null |
10.1145/3544548.3581402
| null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative AI models have shown impressive ability to produce images with
text prompts, which could benefit creativity in visual art creation and
self-expression. However, it is unclear how precisely the generated images
express contexts and emotions from the input texts. We explored the emotional
expressiveness of AI-generated images and developed RePrompt, an automatic
method to refine text prompts toward precise expression of the generated
images. Inspired by crowdsourced editing strategies, we curated intuitive text
features, such as the number and concreteness of nouns, and trained a proxy
model to analyze the feature effects on the AI-generated image. With model
explanations of the proxy model, we curated a rubric to adjust text prompts to
optimize image generation for precise emotion expression. We conducted
simulation and user studies, which showed that RePrompt significantly improves
the emotional expressiveness of AI-generated images, especially for negative
emotions.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 03:31:31 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2023 03:16:51 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Mar 2023 02:34:00 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Wang",
"Yunlong",
""
],
[
"Shen",
"Shuyuan",
""
],
[
"Lim",
"Brian Y.",
""
]
] |
new_dataset
| 0.984535 |
2303.01498
|
Dimitrios Kollias
|
Dimitrios Kollias and Panagiotis Tzirakis and Alice Baird and Alan
Cowen and Stefanos Zafeiriou
|
ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit
Detection & Emotional Reaction Intensity Estimation Challenges
|
arXiv admin note: text overlap with arXiv:2202.10659
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The fifth Affective Behavior Analysis in-the-wild (ABAW) Competition is part
of the respective ABAW Workshop which will be held in conjunction with IEEE
Computer Vision and Pattern Recognition Conference (CVPR), 2023. The 5th ABAW
Competition is a continuation of the Competitions held at ECCV 2022, IEEE CVPR
2022, ICCV 2021, IEEE FG 2020 and CVPR 2017 Conferences, and is dedicated to
automatically analyzing affect. For this year's Competition, we feature two
corpora: i) an extended version of the Aff-Wild2 database and ii) the
Hume-Reaction dataset. The former database is an audiovisual one of around 600
videos of around 3M frames and is annotated with respect to: a) two continuous
affect dimensions, valence (how positive/negative a person is) and arousal (how
active/passive a person is); b) basic expressions (e.g. happiness, sadness,
neutral state); and c) atomic facial muscle actions (i.e., action units). The
latter dataset is an audiovisual one in which reactions of individuals to
emotional stimuli have been annotated with respect to seven emotional
expression intensities. Thus the 5th ABAW Competition encompasses four
Challenges: i) uni-task Valence-Arousal Estimation, ii) uni-task Expression
Classification, iii) uni-task Action Unit Detection, and iv) Emotional Reaction
Intensity Estimation. In this paper, we present these Challenges, along with
their corpora, we outline the evaluation metrics, we present the baseline
systems and illustrate their obtained performance.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 18:58:15 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 04:49:02 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Mar 2023 15:25:09 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Kollias",
"Dimitrios",
""
],
[
"Tzirakis",
"Panagiotis",
""
],
[
"Baird",
"Alice",
""
],
[
"Cowen",
"Alan",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] |
new_dataset
| 0.951045 |
2303.03202
|
Lianyu Hu
|
Lianyu Hu, Liqing Gao, Zekang Liu, Wei Feng
|
Continuous Sign Language Recognition with Correlation Network
|
CVPR2023, Camera ready version. code:
https://github.com/hulianyuyy/CorrNet. Made few modifications on
explanations. arXiv admin note: text overlap with arXiv:2211.17081
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human body trajectories are a salient cue to identify actions in the video.
Such body trajectories are mainly conveyed by hands and face across consecutive
frames in sign language. However, current methods in continuous sign language
recognition (CSLR) usually process frames independently, thus failing to
capture cross-frame trajectories to effectively identify a sign. To handle this
limitation, we propose correlation network (CorrNet) to explicitly capture and
leverage body trajectories across frames to identify signs. Specifically, a
correlation module is first proposed to dynamically compute correlation maps
between the current frame and adjacent frames to identify trajectories of all
spatial patches. An identification module is then presented to dynamically
emphasize the body trajectories within these correlation maps. As a result, the
generated features are able to gain an overview of local temporal movements to
identify a sign. Thanks to its special attention on body trajectories, CorrNet
achieves new state-of-the-art accuracy on four large-scale datasets, i.e.,
PHOENIX14, PHOENIX14-T, CSL-Daily, and CSL. A comprehensive comparison with
previous spatial-temporal reasoning methods verifies the effectiveness of
CorrNet. Visualizations demonstrate the effects of CorrNet on emphasizing human
body trajectories across adjacent frames.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 15:02:12 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 14:21:22 GMT"
},
{
"version": "v3",
"created": "Sat, 18 Mar 2023 12:31:42 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Hu",
"Lianyu",
""
],
[
"Gao",
"Liqing",
""
],
[
"Liu",
"Zekang",
""
],
[
"Feng",
"Wei",
""
]
] |
new_dataset
| 0.981392 |
2303.05075
|
Muqing Cao Mr
|
Muqing Cao, Xinhang Xu, Shenghai Yuan, Kun Cao, Kangcheng Liu, Lihua
Xie
|
DoubleBee: A Hybrid Aerial-Ground Robot with Two Active Wheels
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the dynamic model and control of DoubleBee, a novel hybrid
aerial-ground vehicle consisting of two propellers mounted on tilting servo
motors and two motor-driven wheels. DoubleBee exploits the high energy
efficiency of a bicopter configuration in aerial mode, and enjoys the low power
consumption of a two-wheel self-balancing robot on the ground. Furthermore, the
propeller thrusts act as additional control inputs on the ground, enabling a
novel decoupled control scheme where the attitude of the robot is controlled
using thrusts and the translational motion is realized using wheels. A
prototype of DoubleBee is constructed using commercially available components.
The power efficiency and the control performance of the robot are verified
through comprehensive experiments. Challenging tasks in indoor and outdoor
environments demonstrate the capability of DoubleBee to traverse unstructured
environments, fly over and move under barriers, and climb steep and rough
terrains.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 07:16:07 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 07:06:07 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Cao",
"Muqing",
""
],
[
"Xu",
"Xinhang",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Cao",
"Kun",
""
],
[
"Liu",
"Kangcheng",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.999768 |
2303.05177
|
Mohamed Behery
|
Mohamed Behery, Minh Trinh, Christian Brecher, Gerhard Lakemeyer
|
Assistive Robot Teleoperation Using Behavior Trees
|
VAT@HRI 2023 Workshop
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Robotic assistance in robot arm teleoperation tasks has recently gained a lot
of traction in industrial and domestic environments. A wide variety of input
devices is used in such setups. Due to the noise in the input signals (e.g.,
Brain Computer Interface (BCI)) or delays due to environmental conditions
(e.g., space robot teleoperation), users need assistive autonomy that keeps
them in control while following predefined trajectories and avoids obstacles.
This assistance calls for activity representations that are easy to define by
the operator and able to take the dynamic world state into consideration. This
paper represents Activities of Daily Living using Behavior Trees (BTs) whose
inherent readability and modularity enables an end user to define new
activities using a simple interface. To achieve this, we augment BTs with
Shared Control Action Nodes, which guide the user's input on a trajectory
facilitating and ensuring task execution.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 11:13:15 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 23:25:46 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Behery",
"Mohamed",
""
],
[
"Trinh",
"Minh",
""
],
[
"Brecher",
"Christian",
""
],
[
"Lakemeyer",
"Gerhard",
""
]
] |
new_dataset
| 0.996089 |
2303.07601
|
Runsheng Xu
|
Runsheng Xu, Xin Xia, Jinlong Li, Hanzhao Li, Shuo Zhang, Zhengzhong
Tu, Zonglin Meng, Hao Xiang, Xiaoyu Dong, Rui Song, Hongkai Yu, Bolei Zhou,
Jiaqi Ma
|
V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle
Cooperative Perception
|
Accepted by CVPR2023. Website link:
https://research.seas.ucla.edu/mobility-lab/v2v4real
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modern perception systems of autonomous vehicles are known to be sensitive to
occlusions and to lack long-range perception capability. This has been one of
the key bottlenecks preventing Level 5 autonomy. Recent research has
demonstrated that the Vehicle-to-Vehicle (V2V) cooperative perception system
has great potential to revolutionize the autonomous driving industry. However,
the lack of a real-world dataset hinders the progress of this field. To
facilitate the development of cooperative perception, we present V2V4Real, the
first large-scale real-world multi-modal dataset for V2V perception. The data
is collected by two vehicles equipped with multi-modal sensors driving together
through diverse scenarios. Our V2V4Real dataset covers a driving area of 410
km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding
boxes for 5 classes, and HDMaps that cover all the driving routes. V2V4Real
introduces three perception tasks, including cooperative 3D object detection,
cooperative 3D object tracking, and Sim2Real domain adaptation for cooperative
perception. We provide comprehensive benchmarks of recent cooperative
perception algorithms on three tasks. The V2V4Real dataset can be found at
https://research.seas.ucla.edu/mobility-lab/v2v4real/.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 02:49:20 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 23:01:50 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Xu",
"Runsheng",
""
],
[
"Xia",
"Xin",
""
],
[
"Li",
"Jinlong",
""
],
[
"Li",
"Hanzhao",
""
],
[
"Zhang",
"Shuo",
""
],
[
"Tu",
"Zhengzhong",
""
],
[
"Meng",
"Zonglin",
""
],
[
"Xiang",
"Hao",
""
],
[
"Dong",
"Xiaoyu",
""
],
[
"Song",
"Rui",
""
],
[
"Yu",
"Hongkai",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
new_dataset
| 0.999704 |
2303.08419
|
Jun-Hwa Kim
|
Jun-Hwa Kim, Namho Kim, Chee Sun Won
|
Multi Modal Facial Expression Recognition with Transformer-Based Fusion
Networks and Dynamic Sampling
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial expression recognition is an essential task for various applications,
including emotion detection, mental health analysis, and human-machine
interactions. In this paper, we propose a multi-modal facial expression
recognition method that exploits audio information along with facial images to
provide a crucial clue to differentiate some ambiguous facial expressions.
Specifically, we introduce a Modal Fusion Module (MFM) to fuse audio-visual
information, where image and audio features are extracted from Swin
Transformer. Additionally, we tackle the imbalance problem in the dataset by
employing dynamic data resampling. Our model has been evaluated in the
Affective Behavior in-the-wild (ABAW) challenge of CVPR 2023.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 07:40:28 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 04:47:43 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Kim",
"Jun-Hwa",
""
],
[
"Kim",
"Namho",
""
],
[
"Won",
"Chee Sun",
""
]
] |
new_dataset
| 0.998127 |
2303.08536
|
Yong Man Ro
|
Joanna Hong, Minsu Kim, Jeongsoo Choi, Yong Man Ro
|
Watch or Listen: Robust Audio-Visual Speech Recognition with Visual
Corruption Modeling and Reliability Scoring
|
Accepted at CVPR 2023. Implementation available:
https://github.com/joannahong/AV-RelScore
| null | null | null |
cs.MM cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper deals with Audio-Visual Speech Recognition (AVSR) under multimodal
input corruption situations where audio inputs and visual inputs are both
corrupted, which is not well addressed in previous research directions.
Previous studies have focused on how to complement the corrupted audio inputs
with the clean visual inputs with the assumption of the availability of clean
visual inputs. However, in real life, clean visual inputs are not always
accessible and can even be corrupted by occluded lip regions or noises. Thus,
we first demonstrate that previous AVSR models are in fact not robust to the
corruption of multimodal input streams, the audio and the visual inputs,
compared to uni-modal models. Then, we design multimodal input corruption
modeling to develop robust AVSR models. Lastly, we propose a novel AVSR
framework, namely Audio-Visual Reliability Scoring module (AV-RelScore), that
is robust to the corrupted multimodal inputs. The AV-RelScore can determine
which input modal stream is reliable or not for the prediction and also can
exploit the more reliable streams in prediction. The effectiveness of the
proposed method is evaluated with comprehensive experiments on popular
benchmark databases, LRS2 and LRS3. We also show that the reliability scores
obtained by AV-RelScore well reflect the degree of corruption and make the
proposed model focus on the reliable multimodal representations.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 11:29:36 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 07:01:45 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Hong",
"Joanna",
""
],
[
"Kim",
"Minsu",
""
],
[
"Choi",
"Jeongsoo",
""
],
[
"Ro",
"Yong Man",
""
]
] |
new_dataset
| 0.95203 |
2303.09095
|
Yudi Dai
|
Yudi Dai (1), Yitai Lin (1), Xiping Lin (2), Chenglu Wen (1), Lan Xu
(2), Hongwei Yi (3), Siqi Shen (1), Yuexin Ma (2), Cheng Wang (1) ((1) Xiamen
University, China, (2) ShanghaiTech University, China, (3) Max Planck
Institute for Intelligent Systems, Germany)
|
SLOPER4D: A Scene-Aware Dataset for Global 4D Human Pose Estimation in
Urban Environments
|
11 pages,7 figures, CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present SLOPER4D, a novel scene-aware dataset collected in large urban
environments to facilitate the research of global human pose estimation (GHPE)
with human-scene interaction in the wild. Employing a head-mounted device
integrated with a LiDAR and camera, we record 12 human subjects' activities
over 10 diverse urban scenes from an egocentric view. Frame-wise annotations
for 2D key points, 3D pose parameters, and global translations are provided,
together with reconstructed scene point clouds. To obtain accurate 3D ground
truth in such large dynamic scenes, we propose a joint optimization method to
fit local SMPL meshes to the scene and fine-tune the camera calibration during
dynamic motions frame by frame, resulting in plausible and scene-natural 3D
human poses. Eventually, SLOPER4D consists of 15 sequences of human motions,
each of which has a trajectory length of more than 200 meters (up to 1,300
meters) and covers an area of more than 2,000 $m^2$ (up to 13,000 $m^2$),
including more than 100K LiDAR frames, 300k video frames, and 500K IMU-based
motion frames. With SLOPER4D, we provide a detailed and thorough analysis of
two critical tasks, including camera-based 3D HPE and LiDAR-based 3D HPE in
urban environments, and benchmark a new task, GHPE. The in-depth analysis
demonstrates SLOPER4D poses significant challenges to existing methods and
produces great research opportunities. The dataset and code are released at
\url{http://www.lidarhumanmotion.net/sloper4d/}
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 05:54:15 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Mar 2023 13:44:08 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Dai",
"Yudi",
""
],
[
"Lin",
"Yitai",
""
],
[
"Lin",
"Xiping",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Xu",
"Lan",
""
],
[
"Yi",
"Hongwei",
""
],
[
"Shen",
"Siqi",
""
],
[
"Ma",
"Yuexin",
""
],
[
"Wang",
"Cheng",
""
]
] |
new_dataset
| 0.999825 |
2303.09339
|
Stefan Larson
|
Alexander Groleau, Kok Wei Chee, Stefan Larson, Samay Maini, Jonathan
Boarman
|
ShabbyPages: A Reproducible Document Denoising and Binarization Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Document denoising and binarization are fundamental problems in the document
processing space, but current datasets are often too small and lack sufficient
complexity to effectively train and benchmark modern data-driven machine
learning models. To fill this gap, we introduce ShabbyPages, a new document
image dataset designed for training and benchmarking document denoisers and
binarizers. ShabbyPages contains over 6,000 clean "born digital" images with
synthetically-noised counterparts ("shabby pages") that were augmented using
the Augraphy document augmentation tool to appear as if they have been printed
and faxed, photocopied, or otherwise altered through physical processes. In
this paper, we discuss the creation process of ShabbyPages and demonstrate the
utility of ShabbyPages by training convolutional denoisers which remove real
noise features with a high degree of human-perceptible fidelity, establishing
baseline performance for a new ShabbyPages benchmark.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 14:19:50 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 19:48:36 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Groleau",
"Alexander",
""
],
[
"Chee",
"Kok Wei",
""
],
[
"Larson",
"Stefan",
""
],
[
"Maini",
"Samay",
""
],
[
"Boarman",
"Jonathan",
""
]
] |
new_dataset
| 0.999833 |
2303.10174
|
Jialiang Tan
|
Jialiang Tan, Yu Chen, Shuyin Jiao
|
Visual Studio Code in Introductory Computer Science Course: An
Experience Report
| null | null | null | null |
cs.HC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Involving integrated development environments (IDEs) in introductory-level
(CS1) programming courses is critical. However, it is difficult for instructors
to find a suitable IDE that is beginner friendly and supports strong
functionality. In this paper, we report the experience of using Visual Studio
Code (VS Code) in a CS1 programming course. We describe our motivation for
choosing VS Code and how we introduce it to students. We create comprehensive
guidance with hierarchical indexing to help students with diverse programming
backgrounds. We perform an experimental evaluation of students' programming
experience of using VS Code and validate VS Code together with the guidance as
a promising solution for CS1 programming courses.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 03:19:25 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Tan",
"Jialiang",
""
],
[
"Chen",
"Yu",
""
],
[
"Jiao",
"Shuyin",
""
]
] |
new_dataset
| 0.969063 |
2303.10179
|
Koichiro Yawata
|
Koichiro Yawata, Yoshihiro Osakabe, Takuya Okuyama, Akinori Asahara
|
QUBO-inspired Molecular Fingerprint for Chemical Property Prediction
|
2022 IEEE International Conference on Big Data (Big Data). arXiv
admin note: substantial text overlap with arXiv:2303.09772
| null |
10.1109/BigData55660.2022.10020236
| null |
cs.LG cs.LO q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular fingerprints are widely used for predicting chemical properties,
and selecting appropriate fingerprints is important. We generate new
fingerprints based on the assumption that a more effective fingerprint yields
better prediction performance. We generate effective interaction
fingerprints that are the product of multiple base fingerprints. It is
difficult to evaluate all combinations of interaction fingerprints because of
computational limitations. To address this problem, we transform the problem of
searching for more effective interaction fingerprints into a quadratic
unconstrained binary optimization problem. In this study, we found effective
interaction fingerprints using QM9 dataset.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 04:40:49 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Yawata",
"Koichiro",
""
],
[
"Osakabe",
"Yoshihiro",
""
],
[
"Okuyama",
"Takuya",
""
],
[
"Asahara",
"Akinori",
""
]
] |
new_dataset
| 0.987117 |
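The abstract above turns the search for effective interaction fingerprints into a quadratic unconstrained binary optimization (QUBO) problem. As a generic illustration of what a QUBO is and how a tiny instance can be solved by exhaustive enumeration, here is a sketch; the Q matrix is made up, and the paper's construction of Q from fingerprint effectiveness is not reproduced.

```python
import itertools
import numpy as np

# A QUBO minimizes x^T Q x over binary vectors x in {0, 1}^n.
Q = np.array([
    [-1.0,  0.5,  0.0],
    [ 0.5, -2.0,  1.5],
    [ 0.0,  1.5, -1.0],
])

def qubo_energy(x: np.ndarray) -> float:
    return float(x @ Q @ x)

# Brute force is fine for tiny n; realistic instances go to annealing or
# other heuristic solvers instead.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=len(Q))),
           key=qubo_energy)
print("best selection:", best, "energy:", qubo_energy(best))
```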
2303.10230
|
Yong Zheng
|
Yong Zheng
|
ITM-Rec: An Open Data Set for Educational Recommender Systems
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the development of recommender systems (RS), several promising systems
have emerged, such as context-aware RS, multi-criteria RS, and group RS.
However, the education domain may not benefit from these developments due to
missing information, such as contexts and multiple criteria, in educational
data sets. In this paper, we announce and release an open data set for
educational recommender systems. This data set includes not only traditional
rating entries, but also enriched information, e.g., contexts, user preferences
in multiple criteria, group compositions and preferences, etc. It provides a
testbed and enables more opportunities to develop and examine various
educational recommender systems.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 20:08:59 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Zheng",
"Yong",
""
]
] |
new_dataset
| 0.967162 |
2303.10247
|
Jiri Matas
|
David Korcak and Jiri Matas
|
Video shutter angle estimation using optical flow and linear blur
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a method for estimating the shutter angle, a.k.a. exposure
fraction -- the ratio of the exposure time and the reciprocal of frame rate --
of video clips containing motion. The approach exploits the relation between the
exposure fraction, optical flow, and linear motion blur. Robustness is achieved
by selecting image patches where both the optical flow and blur estimates are
reliable and checking their consistency. The method was evaluated on the publicly
available Beam-Splitter Dataset with a range of exposure fractions from 0.015
to 0.36. The best achieved mean absolute error of the estimates was 0.039. We
also successfully test the suitability of the method for a forensic application:
detecting video tampering by frame removal or insertion.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 20:54:04 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Korcak",
"David",
""
],
[
"Matas",
"Jiri",
""
]
] |
new_dataset
| 0.994976 |
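The record above describes estimating the exposure fraction from the relation between optical flow and linear motion blur. As a purely illustrative aid (not the authors' implementation), the following minimal Python sketch assumes per-patch blur extents and inter-frame flow magnitudes are already available and combines them as a ratio with a simple median-based consistency filter; the function name, thresholds, and filtering rule are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): estimating the exposure fraction
# from per-patch optical-flow displacement and linear-blur extent, assuming the
# blur length (in pixels) and the inter-frame flow magnitude are already given.
import numpy as np

def estimate_exposure_fraction(blur_lengths, flow_magnitudes,
                               min_flow=1.0, max_rel_spread=0.5):
    """Estimate the exposure fraction as the ratio blur / flow over reliable patches."""
    blur = np.asarray(blur_lengths, dtype=float)
    flow = np.asarray(flow_magnitudes, dtype=float)

    # Keep only patches where the motion is large enough for a stable ratio.
    mask = flow > min_flow
    ratios = blur[mask] / flow[mask]

    # Consistency check: drop per-patch ratios far from the median before averaging.
    med = np.median(ratios)
    keep = np.abs(ratios - med) <= max_rel_spread * med
    return float(np.clip(np.mean(ratios[keep]), 0.0, 1.0))

# Toy example: blur of ~3 px with ~10 px of inter-frame motion -> exposure fraction ~0.3
print(estimate_exposure_fraction([3.1, 2.9, 3.0, 8.0], [10.0, 9.8, 10.2, 10.1]))
```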
2303.10288
|
Jun Zhao
|
Terence Jie Chua, Wenhan Yu, Jun Zhao
|
Mobile Edge Adversarial Detection for Digital Twinning to the Metaverse
with Deep Reinforcement Learning
|
This paper appears in IEEE International Conference on
Communications, 2023
| null | null | null |
cs.NI cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time Digital Twinning of physical world scenes onto the Metaverse is
necessary for a myriad of applications such as augmented-reality (AR) assisted
driving. In AR assisted driving, physical environment scenes are first captured
by Internet of Vehicles (IoVs) and are uploaded to the Metaverse. A central
Metaverse Map Service Provider (MMSP) will aggregate information from all IoVs
to develop a central Metaverse Map. Information from the Metaverse Map can then
be downloaded into individual IoVs on demand and be delivered as AR scenes to
the driver. However, the growing interest in developing AR assisted driving
applications which rely on digital twinning invites adversaries. These
adversaries may place physical adversarial patches on physical world objects
such as cars, signboards, or roads, seeking to contort the virtual world
digital twin. Hence, there is a need to detect these physical world adversarial
patches. Nevertheless, as real-time, accurate detection of adversarial patches
is compute-intensive, these physical world scenes have to be offloaded to the
Metaverse Map Base Stations (MMBS) for computation. Hence in our work, we
considered an environment with moving Internet of Vehicles (IoV), uploading
real-time physical world scenes to the MMBSs. We formulated a realistic joint
variable optimization problem where the MMSPs' objective is to maximize
adversarial patch detection mean average precision (mAP), while minimizing the
computed AR scene up-link transmission latency and IoVs' up-link transmission
idle count, through optimizing the IoV-MMBS allocation and IoV up-link scene
resolution selection. We proposed a Heterogeneous Action Proximal Policy
Optimization (HAPPO) (discrete-continuous) algorithm to tackle the proposed
problem. Extensive experiments show that HAPPO outperforms baseline models on
key metrics.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 00:03:50 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Chua",
"Terence Jie",
""
],
[
"Yu",
"Wenhan",
""
],
[
"Zhao",
"Jun",
""
]
] |
new_dataset
| 0.997834 |
2303.10311
|
Punyajoy Saha
|
Punyajoy Saha, Kiran Garimella, Narla Komal Kalyan, Saurabh Kumar
Pandey, Pauras Mangesh Meher, Binny Mathew, and Animesh Mukherjee
|
On the rise of fear speech in online social media
|
16 pages, 9 tables, 15 figures, accepted in Proceedings of the
National Academy of Sciences of the United States of America
| null |
10.1073/pnas.2212270120
| null |
cs.SI cs.CL cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, social media platforms are heavily moderated to prevent the spread
of online hate speech, which is usually fertile in toxic words and is directed
toward an individual or a community. Owing to such heavy moderation, newer and
more subtle techniques are being deployed. One of the most striking among these
is fear speech. Fear speech, as the name suggests, attempts to incite fear
about a target community. Although subtle, it might be highly effective, often
pushing communities toward a physical conflict. Therefore, understanding their
prevalence in social media is of paramount importance. This article presents a
large-scale study to understand the prevalence of 400K fear speech and over
700K hate speech posts collected from Gab.com. Remarkably, users posting a
large number of fear speech accrue more followers and occupy more central
positions in social networks than users posting a large number of hate speech.
They can also reach out to benign users more effectively than hate speech users
through replies, reposts, and mentions. This connects to the fact that, unlike
hate speech, fear speech has almost zero toxic content, making it look
plausible. Moreover, while fear speech topics mostly portray a community as a
perpetrator using a (fake) chain of argumentation, hate speech topics hurl
direct multitarget insults, thus pointing to why general users could be more
gullible to fear speech. Our findings transcend even to other platforms
(Twitter and Facebook) and thus necessitate using sophisticated moderation
policies and mass awareness to combat fear speech.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 02:46:49 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Saha",
"Punyajoy",
""
],
[
"Garimella",
"Kiran",
""
],
[
"Kalyan",
"Narla Komal",
""
],
[
"Pandey",
"Saurabh Kumar",
""
],
[
"Meher",
"Pauras Mangesh",
""
],
[
"Mathew",
"Binny",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.971836 |
2303.10321
|
Peiwen Pan
|
Peiwen Pan, Huan Wang, Chenyi Wang, Chang Nie
|
ABC: Attention with Bilinear Correlation for Infrared Small Target
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared small target detection (ISTD) has a wide range of applications in
early warning, rescue, and guidance. However, CNN based deep learning methods
are not effective at segmenting infrared small targets (IRST), which lack
clear contour and texture features, and transformer based methods also struggle
to achieve significant results due to the absence of convolutional inductive
bias. To address these issues, we propose a new model called attention with
bilinear correlation (ABC), which is based on the transformer architecture and
includes a convolution linear fusion transformer (CLFT) module with a novel
attention mechanism for feature extraction and fusion, which effectively
enhances target features and suppresses noise. Additionally, our model includes
a u-shaped convolution-dilated convolution (UCDC) module located in the deeper layers
of the network, which takes advantage of the smaller resolution of deeper
features to obtain finer semantic information. Experimental results on public
datasets demonstrate that our approach achieves state-of-the-art performance.
Code is available at https://github.com/PANPEIWEN/ABC
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 03:47:06 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Pan",
"Peiwen",
""
],
[
"Wang",
"Huan",
""
],
[
"Wang",
"Chenyi",
""
],
[
"Nie",
"Chang",
""
]
] |
new_dataset
| 0.992821 |
2303.10325
|
Guandong Li
|
Guandong Li, Xian Yang
|
Smartbanner: Intelligent banner design framework that strikes a balance
between creative freedom and design rules
| null |
Published 23 November 2022 Art Multimedia Tools and Applications
|
10.1007/s11042-022-14138-7
| null |
cs.HC cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Companies use banners extensively to promote their products, and the
intelligent automatic synthesis of banners is a challenging task. Given only a
small amount of input information, such as product, text and size, such a system
must synthesize styles with high freedom and richness, while at the
same time satisfying the design specifications of advertisers for
advertising and scenes. We propose an intelligent banner design framework that
strikes a balance between creative freedom and design rules, called
smartbanner. Smartbanner consists of planner, actuator, adjuster and generator.
The banner is synthesized through the combined framework, which fully liberates
the designer and reduces the threshold and cost of design. It increases the
click-through rate by 30%, improves the human efficiency of designers by 500%
under the condition of ensuring the quality of creation, and synthesizes
hundreds of millions of pictures in batches throughout the year.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 04:01:53 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Li",
"Guandong",
""
],
[
"Yang",
"Xian",
""
]
] |
new_dataset
| 0.999031 |
2303.10361
|
Yucheng Ding
|
Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lyu, Guihai
Chen
|
DC-CCL: Device-Cloud Collaborative Controlled Learning for Large Vision
Models
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many large vision models have been deployed on the cloud for real-time
services. Meanwhile, fresh samples are continuously generated on the served
mobile device. How to leverage the device-side samples to improve the
cloud-side large model becomes a practical requirement, but falls into the
dilemma of no raw sample up-link and no large model down-link. Specifically,
the user may opt out of sharing raw samples with the cloud due to the concern
of privacy or communication overhead, while the size of some large vision
models far exceeds the mobile device's runtime capacity. In this work, we
propose a device-cloud collaborative controlled learning framework, called
DC-CCL, enabling a cloud-side large vision model that cannot be directly
deployed on the mobile device to still benefit from the device-side local
samples. In particular, DC-CCL vertically splits the base model into two
submodels, one large submodel for learning from the cloud-side samples and the
other small submodel for learning from the device-side samples and performing
device-cloud knowledge fusion. Nevertheless, on-device training of the small
submodel requires the output of the cloud-side large submodel to compute the
desired gradients. DC-CCL thus introduces a light-weight model to mimic the
large cloud-side submodel with knowledge distillation, which can be offloaded
to the mobile device to control its small submodel's optimization direction.
Given the decoupling nature of two submodels in collaborative learning, DC-CCL
also allows the cloud to take a pre-trained model and the mobile device to take
another model with a different backbone architecture.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 08:35:12 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Ding",
"Yucheng",
""
],
[
"Niu",
"Chaoyue",
""
],
[
"Wu",
"Fan",
""
],
[
"Tang",
"Shaojie",
""
],
[
"Lyu",
"Chengfei",
""
],
[
"Chen",
"Guihai",
""
]
] |
new_dataset
| 0.960461 |
2303.10391
|
Andrzej Pelc
|
Andrzej Pelc
|
Deterministic Rendezvous Algorithms
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The task of rendezvous (also called {\em gathering}) calls for a meeting of
two or more mobile entities, starting from different positions in some
environment. Those entities are called mobile agents or robots, and the
environment can be a network modeled as a graph or a terrain in the plane,
possibly with obstacles. The rendezvous problem has been studied in many
different scenarios. Two among many adopted assumptions particularly influence
the methodology to be used to accomplish rendezvous. One of the assumptions
specifies whether the agents in their navigation can see something apart from
parts of the environment itself, for example other agents or marks left by
them. The other assumption concerns the way in which the entities move: it can
be either deterministic or randomized. In this paper we survey results on
deterministic rendezvous of agents that cannot see the other agents prior to
meeting them, and cannot leave any marks.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 10:54:38 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Pelc",
"Andrzej",
""
]
] |
new_dataset
| 0.999313 |
2303.10443
|
Yuntao Wang
|
Jiexin Ding, Bowen Zhao, Yuqi Huang, Yuntao Wang, Yuanchun Shi
|
GazeReader: Detecting Unknown Word Using Webcam for English as a Second
Language (ESL) Learners
|
This paper has been accepted by ACM CHI 2023
| null | null | null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Automatic unknown word detection techniques can enable new applications for
assisting English as a Second Language (ESL) learners, thus improving their
reading experiences. However, most modern unknown word detection methods
require dedicated eye-tracking devices with high precision that are not easily
accessible to end-users. In this work, we propose GazeReader, an unknown word
detection method only using a webcam. GazeReader tracks the learner's gaze and
then applies a transformer-based machine learning model that encodes the text
information to locate the unknown word. We applied knowledge enhancement
including term frequency, part of speech, and named entity recognition to
improve the performance. The user study indicates that the accuracy and
F1-score of our method were 98.09% and 75.73%, respectively. Lastly, we
explored the design scope for ESL reading and discussed the findings.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 15:55:49 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Ding",
"Jiexin",
""
],
[
"Zhao",
"Bowen",
""
],
[
"Huang",
"Yuqi",
""
],
[
"Wang",
"Yuntao",
""
],
[
"Shi",
"Yuanchun",
""
]
] |
new_dataset
| 0.997818 |
2303.10515
|
Hanliang Zhang
|
Hanliang Zhang, Cristina David, Yijun Yu, Meng Wang
|
Ownership guided C to Rust translation
| null | null | null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Dubbed a safer C, Rust is a modern programming language that combines memory
safety and low-level control. This interesting combination has made Rust very
popular among developers and there is a growing trend of migrating legacy
codebases (very often in C) to Rust. In this paper, we present a C to Rust
translation approach centred around static ownership analysis. We design a
suite of analyses that infer ownership models of C pointers and automatically
translate the pointers into safe Rust equivalents. The resulting tool, Crown,
scales to real-world codebases (half a million lines of code in less than 10
seconds) and achieves a high conversion rate.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 23:14:04 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Zhang",
"Hanliang",
""
],
[
"David",
"Cristina",
""
],
[
"Yu",
"Yijun",
""
],
[
"Wang",
"Meng",
""
]
] |
new_dataset
| 0.996122 |
2303.10546
|
Andr\'es Monroy-Hern\'andez
|
Samantha Reig, Erica Principe Cruz, Melissa M. Powers, Jennifer He,
Timothy Chong, Yu Jiang Tham, Sven Kratz, Ava Robinson, Brian A. Smith, Rajan
Vaish and Andr\'es Monroy-Hern\'andez
|
Supporting Piggybacked Co-Located Leisure Activities via Augmented
Reality
| null | null | null | null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Technology, especially the smartphone, is villainized for taking meaning and
time away from in-person interactions and secluding people into "digital
bubbles". We believe this is not an intrinsic property of digital gadgets, but
evidence of a lack of imagination in technology design. Leveraging augmented
reality (AR) toward this end allows us to create experiences for multiple
people, their pets, and their environments. In this work, we explore the design
of AR technology that "piggybacks" on everyday leisure to foster co-located
interactions among close ties (with other people and pets). We designed,
developed, and deployed three such AR applications, and evaluated them through
a 41-participant and 19-pet user study. We gained key insights about the
ability of AR to spur and enrich interaction in new channels, the importance of
customization, and the challenges of designing for the physical aspects of AR
devices (e.g., holding smartphones). These insights guide design implications
for the novel research space of co-located AR.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 03:09:08 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Reig",
"Samantha",
""
],
[
"Cruz",
"Erica Principe",
""
],
[
"Powers",
"Melissa M.",
""
],
[
"He",
"Jennifer",
""
],
[
"Chong",
"Timothy",
""
],
[
"Tham",
"Yu Jiang",
""
],
[
"Kratz",
"Sven",
""
],
[
"Robinson",
"Ava",
""
],
[
"Smith",
"Brian A.",
""
],
[
"Vaish",
"Rajan",
""
],
[
"Monroy-Hernández",
"Andrés",
""
]
] |
new_dataset
| 0.950571 |
2303.10560
|
Yinping Yang Dr
|
Brandon Siyuan Loh, Raj Kumar Gupta, Ajay Vishwanath, Andrew Ortony,
Yinping Yang
|
How People Respond to the COVID-19 Pandemic on Twitter: A Comparative
Analysis of Emotional Expressions from US and India
|
13 pages, 3 figures, 1 table, 2 appendices
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has claimed millions of lives worldwide and elicited
heightened emotions. This study examines the expression of various emotions
pertaining to COVID-19 in the United States and India as manifested in over 54
million tweets, covering the fifteen-month period from February 2020 through
April 2021, a period which includes the beginnings of the huge and disastrous
increase in COVID-19 cases that started to ravage India in March 2021.
Employing pre-trained emotion analysis and topic modeling algorithms, four
distinct types of emotions (fear, anger, happiness, and sadness) and their
time- and location-associated variations were examined. Results revealed
significant country differences and temporal changes in the relative
proportions of fear, anger, and happiness, with fear declining and anger and
happiness fluctuating in 2020 until new situations over the first four months
of 2021 reversed the trends. Detected differences are discussed briefly in
terms of the latent topics revealed and through the lens of appraisal theories
of emotions, and the implications of the findings are discussed.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 04:05:10 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Loh",
"Brandon Siyuan",
""
],
[
"Gupta",
"Raj Kumar",
""
],
[
"Vishwanath",
"Ajay",
""
],
[
"Ortony",
"Andrew",
""
],
[
"Yang",
"Yinping",
""
]
] |
new_dataset
| 0.989578 |
2303.10571
|
Zongqing Lu
|
Ziluo Ding, Hao Luo, Ke Li, Junpeng Yue, Tiejun Huang, and Zongqing Lu
|
CLIP4MC: An RL-Friendly Vision-Language Model for Minecraft
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the essential missions in the AI research community is to build an
autonomous embodied agent that can attain high-level performance across a wide
spectrum of tasks. However, acquiring reward/penalty in all open-ended tasks is
unrealistic, making the Reinforcement Learning (RL) training procedure
impossible. In this paper, we propose a novel cross-modal contrastive learning
framework, CLIP4MC, aiming to learn an RL-friendly vision-language
model that serves as a reward function for open-ended tasks. Therefore, no
further task-specific reward design is needed. Intuitively, it is more
reasonable for the model to address the similarity between the video snippet
and the language prompt at both the action and entity levels. To this end, a
motion encoder is proposed to capture the motion embeddings across different
intervals. The correlation scores are then used to construct the auxiliary
reward signal for RL agents. Moreover, we construct a neat YouTube dataset
based on the large-scale YouTube database provided by MineDojo. Specifically,
two rounds of filtering operations guarantee that the dataset covers enough
essential information and that the video-text pair is highly correlated.
Empirically, we show that the proposed method achieves better performance on RL
tasks compared with baselines.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 05:20:52 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Ding",
"Ziluo",
""
],
[
"Luo",
"Hao",
""
],
[
"Li",
"Ke",
""
],
[
"Yue",
"Junpeng",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Lu",
"Zongqing",
""
]
] |
new_dataset
| 0.999212 |
2303.10612
|
H.A.Z.Sameen Shahgir
|
H.A.Z. Sameen Shahgir, Khondker Salman Sayeed
|
Bangla Grammatical Error Detection Using T5 Transformer Model
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a method for detecting grammatical errors in Bangla with
a Text-to-Text Transfer Transformer (T5) language model, using the small
variant of BanglaT5 fine-tuned on a corpus of 9385 sentences in which errors were
bracketed by a dedicated demarcation symbol. The T5 model was primarily
designed for translation and is not tailored to this task, so
extensive post-processing was necessary to adapt it to the task of error
detection. Our experiments show that the T5 model can achieve low Levenshtein
Distance in detecting grammatical errors in Bangla, but post-processing is
essential to achieve optimal performance. The final average Levenshtein
Distance after post-processing the output of the fine-tuned model was 1.0394 on
a test set of 5000 sentences. This paper also presents a detailed analysis of
the errors detected by the model and discusses the challenges of adapting a
translation model for grammar. Our approach can be extended to other languages,
demonstrating the potential of T5 models for detecting grammatical errors in a
wide range of languages.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 09:24:48 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Shahgir",
"H. A. Z. Sameen",
""
],
[
"Sayeed",
"Khondker Salman",
""
]
] |
new_dataset
| 0.950414 |
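The record above evaluates detection quality with an average Levenshtein distance over post-processed model outputs. As a purely illustrative aid (not the paper's code), the following minimal Python sketch shows how such a metric could be computed; the function names, the `$` demarcation symbol, and the toy sentences are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code): average character-level Levenshtein
# distance between model outputs and reference sentences.
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insert/delete/substitute cost 1).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def average_levenshtein(predictions, references) -> float:
    assert len(predictions) == len(references)
    total = sum(levenshtein(p, r) for p, r in zip(predictions, references))
    return total / len(references)

# Toy example with a hypothetical '$' demarcation symbol marking error spans.
preds = ["tini bhat khay", "se $skul$ jay"]
refs  = ["tini bhat khay", "se $skule$ jay"]
print(average_levenshtein(preds, refs))  # 0.5 for this toy pair
```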
2303.10613
|
Pu Li
|
Pu Li, Jianwei Guo, Xiaopeng Zhang, Dong-ming Yan
|
SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude
Operations
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Reverse engineering CAD models from raw geometry is a classic but strenuous
research problem. Previous learning-based methods rely heavily on labels due to
the supervised design patterns or reconstruct CAD shapes that are not easily
editable. In this work, we introduce SECAD-Net, an end-to-end neural network
aimed at reconstructing compact and easy-to-edit CAD models in a
self-supervised manner. Drawing inspiration from the modeling language that is
most commonly used in modern CAD software, we propose to learn 2D sketches and
3D extrusion parameters from raw shapes, from which a set of extrusion
cylinders can be generated by extruding each sketch from a 2D plane into a 3D
body. By incorporating the Boolean operation (i.e., union), these cylinders can
be combined to closely approximate the target geometry. We advocate the use of
implicit fields for sketch representation, which allows for creating CAD
variations by interpolating latent codes in the sketch latent space. Extensive
experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness
of our method, and show superiority over state-of-the-art alternatives
including the closely related method for supervised CAD reconstruction. We
further apply our approach to CAD editing and single-view CAD reconstruction.
The code is released at https://github.com/BunnySoCrazy/SECAD-Net.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 09:26:03 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Li",
"Pu",
""
],
[
"Guo",
"Jianwei",
""
],
[
"Zhang",
"Xiaopeng",
""
],
[
"Yan",
"Dong-ming",
""
]
] |
new_dataset
| 0.970945 |
2303.10674
|
Hongmeng Liu
|
Hongmeng Liu, Jiapeng Zhao, Yixuan Huo, Yuyan Wang, Chun Liao, Liyan
Shen, Shiyao Cui, Jinqiao Shi
|
URM4DMU: an user represention model for darknet markets users
|
9 pages
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Darknet markets provide a large platform for trading illicit goods and
services due to their anonymity. Learning an invariant representation of each
user based on their posts on different markets makes it easy to aggregate user
information across different platforms, which helps identify anonymous users.
Traditional user representation methods mainly rely on modeling the text
information of posts and cannot capture the temporal content and the forum
interaction of posts. Recent works mainly use CNNs to model the text
information of posts, but fail to effectively model posts whose length changes
frequently within an episode. To address the above problems, we propose a model
named URM4DMU (User Representation Model for Darknet Markets Users), which mainly
improves the post representation by augmenting convolutional operators and
self-attention with an adaptive gate mechanism. It performs much better when
combined with the temporal content and the forum interaction of posts. We
demonstrate the effectiveness of URM4DMU on four darknet markets. The average
improvements on MRR value and Recall@10 are 22.5% and 25.5% over the
state-of-the-art method respectively.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 14:33:42 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Liu",
"Hongmeng",
""
],
[
"Zhao",
"Jiapeng",
""
],
[
"Huo",
"Yixuan",
""
],
[
"Wang",
"Yuyan",
""
],
[
"Liao",
"Chun",
""
],
[
"Shen",
"Liyan",
""
],
[
"Cui",
"Shiyao",
""
],
[
"Shi",
"Jinqiao",
""
]
] |
new_dataset
| 0.993363 |
2303.10699
|
Weizhe Lin
|
Weizhe Lin, Zhilin Wang, Bill Byrne
|
FVQA 2.0: Introducing Adversarial Samples into Fact-based Visual
Question Answering
|
Accepted to EACL 2023 Findings
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widely used Fact-based Visual Question Answering (FVQA) dataset contains
visually-grounded questions that require information retrieval using common
sense knowledge graphs to answer. It has been observed that the original
dataset is highly imbalanced and concentrated on a small portion of its
associated knowledge graph. We introduce FVQA 2.0 which contains adversarial
variants of test questions to address this imbalance. We show that systems
trained with the original FVQA train sets can be vulnerable to adversarial
samples and we demonstrate an augmentation scheme to reduce this vulnerability
without human annotations.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 16:07:42 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Lin",
"Weizhe",
""
],
[
"Wang",
"Zhilin",
""
],
[
"Byrne",
"Bill",
""
]
] |
new_dataset
| 0.998999 |
2303.10708
|
Mandy Keck
|
Mandy Keck, Samuel Huron, Georgia Panagiotidou, Christina Stoiber,
Fateme Rajabiyazdi, Charles Perin, Jonathan C. Roberts, Benjamin Bach
|
EduVis: Workshop on Visualization Education, Literacy, and Activities
|
5 pages, no figures, accepted workshop for IEEE VIS 2023
| null | null | null |
cs.HC cs.CY cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
This workshop focuses on visualization education, literacy, and activities.
It aims to streamline previous efforts and initiatives of the visualization
community to provide a format for education and engagement practices in
visualization. It intends to bring together junior and senior scholars to share
research and experience and to discuss novel activities, teaching methods, and
research challenges. The workshop aims to serve as a platform for
interdisciplinary researchers within and beyond the visualization community
such as education, learning analytics, science communication, psychology, or
people from adjacent fields such as data science, AI, and HCI. It will include
presentations of research papers and practical reports, as well as hands-on
activities. In addition, the workshop will allow participants to discuss
challenges they face in data visualization education and sketch a research
agenda of visualization education, literacy, and activities.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 16:35:43 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Keck",
"Mandy",
""
],
[
"Huron",
"Samuel",
""
],
[
"Panagiotidou",
"Georgia",
""
],
[
"Stoiber",
"Christina",
""
],
[
"Rajabiyazdi",
"Fateme",
""
],
[
"Perin",
"Charles",
""
],
[
"Roberts",
"Jonathan C.",
""
],
[
"Bach",
"Benjamin",
""
]
] |
new_dataset
| 0.994916 |
2303.10709
|
Junyuan Deng
|
Junyuan Deng, Xieyuanli Chen, Songpengcheng Xia, Zhen Sun, Guoqing
Liu, Wenxian Yu, Ling Pei
|
NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental
LiDAR Odometry and Mapping
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous odometry and mapping using LiDAR data is an important task for
mobile systems to achieve full autonomy in large-scale environments. However,
most existing LiDAR-based methods prioritize tracking quality over
reconstruction quality. Although the recently developed neural radiance fields
(NeRF) have shown promising advances in implicit reconstruction for indoor
environments, the problem of simultaneous odometry and mapping for large-scale
scenarios using incremental LiDAR data remains unexplored. To bridge this gap,
in this paper, we propose a novel NeRF-based LiDAR odometry and mapping
approach, NeRF-LOAM, consisting of three modules: neural odometry, neural
mapping, and mesh reconstruction. All these modules utilize our proposed neural
signed distance function, which separates LiDAR points into ground and
non-ground points to reduce Z-axis drift, optimizes odometry and voxel
embeddings concurrently, and in the end generates dense smooth mesh maps of the
environment. Moreover, this joint optimization allows our NeRF-LOAM to be
pre-training free and to exhibit strong generalization abilities when applied to
different environments. Extensive evaluations on three publicly available
datasets demonstrate that our approach achieves state-of-the-art odometry and
mapping performance, as well as a strong generalization in large-scale
environments utilizing LiDAR data. Furthermore, we perform multiple ablation
studies to validate the effectiveness of our network design. The implementation
of our approach will be made available at
https://github.com/JunyuanDeng/NeRF-LOAM.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 16:40:36 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Deng",
"Junyuan",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Xia",
"Songpengcheng",
""
],
[
"Sun",
"Zhen",
""
],
[
"Liu",
"Guoqing",
""
],
[
"Yu",
"Wenxian",
""
],
[
"Pei",
"Ling",
""
]
] |
new_dataset
| 0.955809 |
2303.10824
|
Minkyu Jeon
|
Minkyu Jeon, Hyeonjin Park, Hyunwoo J. Kim, Michael Morley, and
Hyunghoon Cho
|
k-SALSA: k-anonymous synthetic averaging of retinal images via local
style alignment
|
European Conference on Computer Vision (ECCV), 2022
| null |
10.1007/978-3-031-19803-8_39
| null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The application of modern machine learning to retinal image analyses offers
valuable insights into a broad range of human health conditions beyond
ophthalmic diseases. Additionally, data sharing is key to fully realizing the
potential of machine learning models by providing a rich and diverse collection
of training data. However, the personally-identifying nature of retinal images,
encompassing the unique vascular structure of each individual, often prevents
this data from being shared openly. While prior works have explored image
de-identification strategies based on synthetic averaging of images in other
domains (e.g. facial images), existing techniques face difficulty in preserving
both privacy and clinical utility in retinal images, as we demonstrate in our
work. We therefore introduce k-SALSA, a generative adversarial network
(GAN)-based framework for synthesizing retinal fundus images that summarize a
given private dataset while satisfying the privacy notion of k-anonymity.
k-SALSA brings together state-of-the-art techniques for training and inverting
GANs to achieve practical performance on retinal images. Furthermore, k-SALSA
leverages a new technique, called local style alignment, to generate a
synthetic average that maximizes the retention of fine-grain visual patterns in
the source images, thus improving the clinical utility of the generated images.
On two benchmark datasets of diabetic retinopathy (EyePACS and APTOS), we
demonstrate our improvement upon existing methods with respect to image
fidelity, classification performance, and mitigation of membership inference
attacks. Our work represents a step toward broader sharing of retinal images
for scientific collaboration. Code is available at
https://github.com/hcholab/k-salsa.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 01:47:04 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Jeon",
"Minkyu",
""
],
[
"Park",
"Hyeonjin",
""
],
[
"Kim",
"Hyunwoo J.",
""
],
[
"Morley",
"Michael",
""
],
[
"Cho",
"Hyunghoon",
""
]
] |
new_dataset
| 0.96968 |
2303.10833
|
Shudi Yang
|
Shudi Yang and Tonghui Zhang
|
Linear Codes From Two Weakly Regular Plateaued Functions with index
(p-1)/2
|
35 pages
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Linear codes are the most important family of codes in coding theory. Some
codes have only a few weights and are widely used in many areas, such as
authentication codes, secret sharing schemes, association schemes and strongly
regular graphs. By setting $ p\equiv 1 \pmod 4 $, we construct an infinite
family of linear codes using two weakly regular unbalanced (and balanced)
plateaued functions with index $ \frac{p-1}{2} $. Most of our constructed codes
have a few weights and are minimal. After analysing their punctured version, we
find that they are projective codes containing some optimal ones.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 02:37:39 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Yang",
"Shudi",
""
],
[
"Zhang",
"Tonghui",
""
]
] |
new_dataset
| 0.998376 |
2303.10975
|
Wang Zhe
|
Zhe Wang, Siqi Fan, Xiaoliang Huo, Tongda Xu, Yan Wang, Jingjing Liu,
Yilun Chen, Ya-Qin Zhang
|
VIMI: Vehicle-Infrastructure Multi-view Intermediate Fusion for
Camera-based 3D Object Detection
|
8 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In autonomous driving, Vehicle-Infrastructure Cooperative 3D Object Detection
(VIC3D) makes use of multi-view cameras from both vehicles and traffic
infrastructure, providing a global vantage point with rich semantic context of
road conditions beyond a single vehicle viewpoint. Two major challenges prevail
in VIC3D: 1) inherent calibration noise when fusing multi-view images, caused
by time asynchrony across cameras; 2) information loss when projecting 2D
features into 3D space. To address these issues, we propose a novel 3D object
detection framework, Vehicles-Infrastructure Multi-view Intermediate fusion
(VIMI). First, to fully exploit the holistic perspectives from both vehicles
and infrastructure, we propose a Multi-scale Cross Attention (MCA) module that
fuses infrastructure and vehicle features on selective multi-scales to correct
the calibration noise introduced by camera asynchrony. Then, we design a
Camera-aware Channel Masking (CCM) module that uses camera parameters as priors
to augment the fused features. We further introduce a Feature Compression (FC)
module with channel and spatial compression blocks to reduce the size of
transmitted features for enhanced efficiency. Experiments show that VIMI
achieves 15.61% overall AP_3D and 21.44% AP_BEV on the new VIC3D dataset,
DAIR-V2X-C, significantly outperforming state-of-the-art early fusion and late
fusion methods with comparable transmission cost.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 09:56:17 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Wang",
"Zhe",
""
],
[
"Fan",
"Siqi",
""
],
[
"Huo",
"Xiaoliang",
""
],
[
"Xu",
"Tongda",
""
],
[
"Wang",
"Yan",
""
],
[
"Liu",
"Jingjing",
""
],
[
"Chen",
"Yilun",
""
],
[
"Zhang",
"Ya-Qin",
""
]
] |
new_dataset
| 0.999049 |
2303.10988
|
Khaled Kassem
|
Khaled Kassem, Alia Saad
|
This Was (Not) Intended: How Intent Communication and Biometrics Can
Enhance Social Interactions With Robots
| null | null | null |
SARTMI/2023/8
|
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Socially Assistive Robots (SARs) are robots that are designed to replicate
the role of a caregiver, coach, or teacher, providing emotional, cognitive, and
social cues to support a specific group. SARs are becoming increasingly
prevalent, especially in elderly care. Effective communication, both explicit
and implicit, is a critical aspect of human-robot interaction involving SARs.
Intent communication is necessary for SARs to engage in effective communication
with humans. Biometrics can provide crucial information about a person's
identity or emotions. By linking these biometric signals to the communication
of intent, SARs can gain a profound understanding of their users and tailor
their interactions accordingly. The development of reliable and robust
biometric sensing and analysis systems is critical to the success of SARs. In
this work, we focus on four different aspects of evaluating the communication of
intent involving SARs, review existing works, and present our outlook on future
work and applications.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 10:17:10 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Kassem",
"Khaled",
""
],
[
"Saad",
"Alia",
""
]
] |
new_dataset
| 0.963304 |
2303.11005
|
Li Yi
|
Li Yi
|
Controllable Ancient Chinese Lyrics Generation Based on Phrase Prototype
Retrieving
| null | null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Generating lyrics and poems is one of the essential downstream tasks in the
Natural Language Processing (NLP) field. Current methods have performed well in
some lyrics generation scenarios but need further improvements in tasks
requiring fine control. We propose a novel method for generating ancient
Chinese lyrics (Song Ci), a type of ancient lyrics that involves precise
control of song structure. The system is equipped with a phrase retriever and a
phrase connector. Based on an input prompt, the phrase retriever picks phrases
from a database to construct a phrase pool. The phrase connector then selects a
series of phrases from the phrase pool that minimizes a multi-term loss
function that considers rhyme, song structure, and fluency. Experimental
results show that our method can generate high-quality ancient Chinese lyrics
while performing well on topic and song structure control. We also expect our
approach to be generalized to other lyrics-generating tasks.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 10:33:06 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Yi",
"Li",
""
]
] |
new_dataset
| 0.996788 |
2303.11032
|
Xiang Li
|
Zhengliang Liu, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai,
Lin Zhao, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu,
Xiang Li
|
DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4
| null | null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The digitization of healthcare has facilitated the sharing and re-using of
medical data but has also raised concerns about confidentiality and privacy.
HIPAA (Health Insurance Portability and Accountability Act) mandates removing
re-identifying information before the dissemination of medical records. Thus,
effective and efficient solutions for de-identifying medical data, especially
those in free-text forms, are highly needed. While various computer-assisted
de-identification methods, including both rule-based and learning-based, have
been developed and used in prior practice, such solutions still lack
generalizability or need to be fine-tuned according to different scenarios,
significantly imposing restrictions on wider use. The advancement of large
language models (LLMs), such as ChatGPT and GPT-4, has shown great potential in
processing text data in the medical domain with zero-shot in-context learning,
especially in the task of privacy protection, as these models can identify
confidential information by their powerful named entity recognition (NER)
capability. In this work, we developed a novel GPT4-enabled de-identification
framework ("DeID-GPT") to automatically identify and remove the identifying
information. Compared to existing commonly used medical text data
de-identification methods, our developed DeID-GPT showed the highest accuracy
and remarkable reliability in masking private information from the unstructured
medical text while preserving the original structure and meaning of the text.
This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text
data processing and de-identification, which provides insights for further
research and solution development on the use of LLMs such as ChatGPT/GPT-4 in
healthcare. Codes and benchmarking data information are available at
https://github.com/yhydhx/ChatGPT-API.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 11:34:37 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Liu",
"Zhengliang",
""
],
[
"Yu",
"Xiaowei",
""
],
[
"Zhang",
"Lu",
""
],
[
"Wu",
"Zihao",
""
],
[
"Cao",
"Chao",
""
],
[
"Dai",
"Haixing",
""
],
[
"Zhao",
"Lin",
""
],
[
"Liu",
"Wei",
""
],
[
"Shen",
"Dinggang",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.994665 |
2303.11034
|
Haohao Sun
|
Haohao Sun, Yilong Zhang, Peng Chen, Haixia Wang, Ronghua Liang
|
Internal Structure Attention Network for Fingerprint Presentation Attack
Detection from Optical Coherence Tomography
|
12 pages, 14 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a non-invasive optical imaging technique, optical coherence tomography
(OCT) has proven promising for automatic fingerprint recognition system (AFRS)
applications. Diverse approaches have been proposed for OCT-based fingerprint
presentation attack detection (PAD). However, considering the complexity and
variety of PA samples, it is extremely challenging to increase the
generalization ability with the limited PA dataset. To solve the challenge,
this paper presents a novel supervised learning-based PAD method, denoted as
ISAPAD, which applies prior knowledge to guide network training and enhance the
generalization ability. The proposed dual-branch architecture not only
learns global features from the OCT image, but also concentrates on the layered
structure features that come from the internal structure attention module
(ISAM). The simple yet effective ISAM enables the proposed network to obtain
layered segmentation features belonging only to bona fide samples directly from
noisy OCT volume data. Combined with effective training strategies and PAD score
generation rules, ISAPAD obtains optimal PAD performance with limited training
data. Domain generalization experiments and visualization analysis validate the
effectiveness of the proposed method for OCT PAD.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 11:36:09 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Sun",
"Haohao",
""
],
[
"Zhang",
"Yilong",
""
],
[
"Chen",
"Peng",
""
],
[
"Wang",
"Haixia",
""
],
[
"Liang",
"Ronghua",
""
]
] |
new_dataset
| 0.992704 |
2303.11049
|
Michael Filler
|
Michael Filler and Benjamin Reinhardt
|
Nanomodular Electronics
|
55 pages, 15 figures
| null | null | null |
cs.CY cs.AR cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It may be possible to reinvent how microelectronics are made using a two-step
process: (1) Synthesizing modular, nanometer-scale components -- transistors,
sensors, and other devices -- and suspending them in a liquid "ink" for storage
or transport; (2) Using a 3D-printer-like machine to create circuits by placing
and wiring the components. Developments in nanotechnology, colloidal chemistry,
precision additive manufacturing, and computer vision suggest this new process
is possible. Herein, we describe a roadmap to these nanomodular electronics,
which could enable a "fab in a box" and make fabricating microelectronics as
straightforward as printing this document.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 12:02:34 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Filler",
"Michael",
""
],
[
"Reinhardt",
"Benjamin",
""
]
] |
new_dataset
| 0.999725 |
2303.11071
|
Stefan Milius
|
Ji\v{r}\'i Ad\'amek and Stefan Milius and Lawrence S. Moss
|
On Kripke, Vietoris and Hausdorff Polynomial Functors
| null | null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
The Vietoris space of compact subsets of a given Hausdorff space yields an
endofunctor $\mathscr V$ on the category of Hausdorff spaces. Vietoris
polynomial endofunctors on that category are built from $\mathscr V$, the
identity and constant functors by forming products, coproducts and
compositions. These functors are known to have terminal coalgebras and we
deduce that they also have initial algebras. We present an analogous class of
endofunctors on the category of extended metric spaces, using in lieu of
$\mathscr V$ the Hausdorff functor $\mathcal H$. We prove that the ensuing
Hausdorff polynomial functors have terminal coalgebras and initial algebras.
Whereas the canonical constructions of terminal coalgebras for Vietoris
polynomial functors take $\omega$ steps, one needs $\omega + \omega$ steps in
general for Hausdorff ones. We also give a new proof that the closed set
functor on metric spaces has no fixed points.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 12:56:41 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Adámek",
"Jiří",
""
],
[
"Milius",
"Stefan",
""
],
[
"Moss",
"Lawrence S.",
""
]
] |
new_dataset
| 0.998105 |
2303.11076
|
Kamil Faber
|
Kamil Faber, Dominik Zurek, Marcin Pietron, Nathalie Japkowicz,
Antonio Vergari, Roberto Corizzo
|
From MNIST to ImageNet and Back: Benchmarking Continual Curriculum
Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Continual learning (CL) is one of the most promising trends in recent machine
learning research. Its goal is to go beyond classical assumptions in machine
learning and develop models and learning strategies that present high
robustness in dynamic environments. The landscape of CL research is fragmented
into several learning evaluation protocols, comprising different learning
tasks, datasets, and evaluation metrics. Additionally, the benchmarks adopted
so far are still distant from the complexity of real-world scenarios, and are
usually tailored to highlight capabilities specific to certain strategies. In
such a landscape, it is hard to objectively assess strategies. In this work, we
fill this gap for CL on image data by introducing two novel CL benchmarks that
involve multiple heterogeneous tasks from six image datasets, with varying
levels of complexity and quality. Our aim is to fairly evaluate current
state-of-the-art CL strategies on a common ground that is closer to complex
real-world scenarios. We additionally structure our benchmarks so that tasks
are presented in increasing and decreasing order of complexity -- according to
a curriculum -- in order to evaluate if current CL models are able to exploit
structure across tasks. We devote particular emphasis to providing the CL
community with a rigorous and reproducible evaluation protocol for measuring
the ability of a model to generalize and not to forget while learning.
Furthermore, we provide an extensive experimental evaluation showing that
popular CL strategies, when challenged with our benchmarks, yield sub-par
performance, high levels of forgetting, and present a limited ability to
effectively leverage curriculum task ordering. We believe that these results
highlight the need for rigorous comparisons in future CL works as well as pave
the way to design new CL strategies that are able to deal with more complex
scenarios.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 18:11:19 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Faber",
"Kamil",
""
],
[
"Zurek",
"Dominik",
""
],
[
"Pietron",
"Marcin",
""
],
[
"Japkowicz",
"Nathalie",
""
],
[
"Vergari",
"Antonio",
""
],
[
"Corizzo",
"Roberto",
""
]
] |
new_dataset
| 0.996071 |
2303.11137
|
Yu Cao
|
Yu Cao, Xiangqiao Meng, P.Y. Mok, Xueting Liu, Tong-Yee Lee, Ping Li
|
AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion
Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Manually colorizing anime line drawing images is time-consuming and tedious
work, and it is an essential stage in the cartoon animation creation
pipeline. Reference-based line drawing colorization is a challenging task that
relies on the precise cross-domain long-range dependency modelling between the
line drawing and reference image. Existing learning methods still utilize
generative adversarial networks (GANs) as one key module of their model
architecture. In this paper, we propose a novel method called AnimeDiffusion
using diffusion models that performs anime face line drawing colorization
automatically. To the best of our knowledge, this is the first diffusion model
tailored for anime content creation. In order to solve the huge training
consumption problem of diffusion models, we design a hybrid training strategy,
first pre-training a diffusion model with classifier-free guidance and then
fine-tuning it with image reconstruction guidance. We find that with a few
iterations of fine-tuning, the model shows wonderful colorization performance,
as illustrated in Fig. 1. For training AnimeDiffusion, we construct an anime face
line drawing colorization benchmark dataset, which contains 31696 training samples
and 579 testing samples. We hope this dataset can fill the gap left by the lack of a
high resolution anime face dataset for evaluating colorization methods. Through
multiple quantitative metrics evaluated on our dataset and a user study, we
demonstrate AnimeDiffusion outperforms state-of-the-art GANs-based models for
anime face line drawing colorization. We also collaborate with professional
artists to test and apply our AnimeDiffusion for their creation work. We
release our code on https://github.com/xq-meng/AnimeDiffusion.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 14:15:23 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Cao",
"Yu",
""
],
[
"Meng",
"Xiangqiao",
""
],
[
"Mok",
"P. Y.",
""
],
[
"Liu",
"Xueting",
""
],
[
"Lee",
"Tong-Yee",
""
],
[
"Li",
"Ping",
""
]
] |
new_dataset
| 0.960257 |
2303.11143
|
Gianluca Capozzi
|
Gianluca Capozzi, Daniele Cono D'Elia, Giuseppe Antonio Di Luna,
Leonardo Querzoni
|
Adversarial Attacks against Binary Similarity Systems
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, binary analysis gained traction as a fundamental approach to
inspect software and guarantee its security. Due to the exponential increase of
devices running software, much research is now moving towards new autonomous
solutions based on deep learning models, as they have been showing
state-of-the-art performances in solving binary analysis problems. One of the
hot topics in this context is binary similarity, which consists in determining
if two functions in assembly code are compiled from the same source code.
However, it is unclear how deep learning models for binary similarity behave in
an adversarial context. In this paper, we study the resilience of binary
similarity models against adversarial examples, showing that they are
susceptible to both targeted and untargeted attacks (w.r.t. similarity goals)
performed by black-box and white-box attackers. In more detail, we extensively
test three current state-of-the-art solutions for binary similarity against two
black-box greedy attacks, including a new technique that we call Spatial
Greedy, and one white-box attack in which we repurpose a gradient-guided
strategy used in attacks on image classifiers.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 14:22:04 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Capozzi",
"Gianluca",
""
],
[
"D'Elia",
"Daniele Cono",
""
],
[
"Di Luna",
"Giuseppe Antonio",
""
],
[
"Querzoni",
"Leonardo",
""
]
] |
new_dataset
| 0.987318 |
2303.11158
|
Boniphace Kutela
|
Boniphace Kutela, Shoujia Li, Subasish Das, and Jinli Liu
|
ChatGPT as the Transportation Equity Information Source for Scientific
Writing
| null | null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Transportation equity is an interdisciplinary agenda that requires both
transportation and social inputs. Traditionally, transportation equity
information has been sourced from public libraries, conferences, television, social
media, among other channels. Artificial intelligence (AI) tools including advanced
language models such as ChatGPT are becoming favorite information sources.
However, their credibility has not been well explored. This study explored the
content and usefulness of ChatGPT-generated information related to
transportation equity. It utilized 152 papers retrieved through the Web of
Science (WoS) repository. The prompt was crafted for ChatGPT to provide an
abstract given the title of the paper. The ChatGPT-based abstracts were then
compared to human-written abstracts using statistical tools and unsupervised
text mining. The results indicate a weak similarity between ChatGPT and
human-written abstracts. On average, the human-written abstracts and ChatGPT
generated abstracts were about 58% similar, with a maximum and minimum of 97%
and 1.4%, respectively. The keywords from the abstracts of papers with above-average
similarity scores were more likely to be similar, whereas those from papers with
below-average scores were less likely to be similar. Themes with high similarity
scores include access, public transit, and policy, among others. Further, clear
differences in the key pattern of clusters for high and low similarity score
abstracts were observed. Contrarily, the findings from collocated keywords were
inconclusive. The study findings suggest that ChatGPT has the potential to be a
source of transportation equity information. However, at present, a great deal
of attention is needed before a user can utilize materials from ChatGPT.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 16:21:54 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Kutela",
"Boniphace",
""
],
[
"Li",
"Shoujia",
""
],
[
"Das",
"Subasish",
""
],
[
"Liu",
"Jinli",
""
]
] |
new_dataset
| 0.990513 |
2303.11171
|
Carlos Lassance
|
Carlos Lassance and St\'ephane Clinchant
|
Naver Labs Europe (SPLADE) @ TREC NeuCLIR 2022
|
Notebook detailing our participation and analysis on the TREC NeuCLIR
2022
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes our participation in the 2022 TREC NeuCLIR challenge. We
submitted runs to two out of the three languages (Farsi and Russian), with a
focus on first-stage rankers and comparing mono-lingual strategies to Adhoc
ones. For monolingual runs, we start by pretraining models on the target
language using MLM+FLOPS and then finetune on MSMARCO translated into
that language, with either ColBERT or SPLADE as the retrieval model. For
the Adhoc task, we test both query translation (to the target language) and
back-translation of the documents (to English). Initial result analysis shows
that the monolingual strategy is strong, but that for the moment Adhoc achieved
the best results, with back-translating documents being better than translating
queries.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 16:56:42 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Lassance",
"Carlos",
""
],
[
"Clinchant",
"Stéphane",
""
]
] |
new_dataset
| 0.981115 |
2303.11190
|
Joaquim Borges
|
Joaquim Borges, Josep Rif\`a, Victor Zinoviev
|
On new infinite families of completely regular and completely transitive
codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In two previous papers we constructed new families of completely regular
codes by concatenation methods. Here we determine cases in which the new codes
are completely transitive. For these cases we also find the automorphism groups
of such codes. For the remaining cases, we show that the codes are not
completely transitive assuming an upper bound on the order of the monomial
automorphism groups, according to computational results.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 15:21:41 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Borges",
"Joaquim",
""
],
[
"Rifà",
"Josep",
""
],
[
"Zinoviev",
"Victor",
""
]
] |
new_dataset
| 0.997432 |
2303.11192
|
Vil\'em Zouhar
|
Vil\'em Zouhar, Sunit Bhattacharya, Ond\v{r}ej Bojar
|
Multimodal Shannon Game with Images
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The Shannon game has long been used as a thought experiment in linguistics
and NLP, asking participants to guess the next letter in a sentence based on
its preceding context. We extend the game by introducing an optional extra
modality in the form of image information. To investigate the impact of
multimodal information in this game, we use human participants and a language
model (LM, GPT-2). We show that the addition of image information improves both
self-reported confidence and accuracy for both humans and the LM. Certain word
classes, such as nouns and determiners, benefit more from the additional
modality information. The priming effect in both humans and the LM becomes more
apparent as the context size (extra modality information + sentence context)
increases. These findings highlight the potential of multimodal information in
improving language understanding and modeling.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 15:22:11 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Zouhar",
"Vilém",
""
],
[
"Bhattacharya",
"Sunit",
""
],
[
"Bojar",
"Ondřej",
""
]
] |
new_dataset
| 0.998587 |
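As a hedged illustration of the text-only Shannon-game setup described in the record above: the snippet below queries GPT-2 (via the Hugging Face transformers API) for its next-token distribution given a sentence prefix. The actual study is letter-level and adds image context; this subword-level, unimodal sketch is a simplification, and the example prefix is hypothetical.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The cat sat on the"  # hypothetical sentence prefix
inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p.item():.3f}")
```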
2303.11220
|
Alexander Heinrich
|
Alexander Heinrich, S\"oren Krollmann, Florentin Putz, Matthias
Hollick
|
Smartphones with UWB: Evaluating the Accuracy and Reliability of UWB
Ranging
|
16 pages, 14 figures
| null | null | null |
cs.CR cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
More and more consumer devices implement the IEEE Ultra-Wide Band (UWB)
standard to perform distance measurements for sensitive tasks such as keyless
entry and startup of modern cars, to find lost items using coin-sized trackers,
and for smart payments. While UWB promises the ability to perform
time-of-flight centimeter-accurate distance measurements between two devices,
the accuracy and reliability of the implementation in up-to-date consumer
devices have not been evaluated so far. In this paper, we present the first
evaluation of UWB smartphones from Apple, Google, and Samsung, focusing on
accuracy and reliability in passive keyless entry and smart home automation
scenarios. To perform the measurements for our analysis, we build a
custom-designed testbed based on a Gimbal-based platform for Wireless
Evaluation (GWEn), which allows us to create reproducible measurements. All our
results, including all measurement data and a manual to reconstruct a GWEn are
published online. We find that the evaluated devices can measure the distance
with an error of less than 20cm, but fail in producing reliable measurements in
all scenarios. Finally, we give recommendations on how to handle measurement
results when implementing a passive keyless entry system.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 15:51:54 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Heinrich",
"Alexander",
""
],
[
"Krollmann",
"Sören",
""
],
[
"Putz",
"Florentin",
""
],
[
"Hollick",
"Matthias",
""
]
] |
new_dataset
| 0.984085 |
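The record above concerns UWB time-of-flight ranging. As a hedged illustration of the underlying arithmetic (not taken from the paper), the sketch below computes distance from a single-sided two-way ranging exchange; real devices typically use more elaborate schemes such as double-sided ranging with clock-drift compensation, and the timestamps here are hypothetical.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def twr_distance(t_round_s: float, t_reply_s: float) -> float:
    """Distance from one single-sided two-way ranging exchange.

    Time of flight is half of (initiator round-trip time minus the
    responder's reply/processing delay).
    """
    time_of_flight = (t_round_s - t_reply_s) / 2.0
    return SPEED_OF_LIGHT * time_of_flight

# Hypothetical timestamps: 120 ns round trip, 100 ns responder delay -> ~3 m.
print(f"{twr_distance(120e-9, 100e-9):.2f} m")
```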
2303.11223
|
Charles Tang
|
Charles Tang
|
A semi-trailer truck right-hook turn blind spot alert system for
detecting vulnerable road users using transfer learning
|
9 pages, 13 figures
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Cycling is an increasingly popular method of transportation for
sustainability and health benefits. However, cyclists face growing risks,
especially when encountering semi-trailer trucks. This study aims to reduce the
number of truck-cyclist collisions, which are often caused by semi-trailer
trucks making right-hook turns and poor driver attention to blind spots. To
achieve this, we designed a visual-based blind spot warning system that can
detect cyclists for semi-trailer truck drivers using deep learning. First,
several greater than 90% mAP cyclist detection models, such as the EfficientDet
Lite 1 and SSD MobileNetV2, were created using state-of-the-art lightweight
deep learning architectures fine-tuned on a newly proposed cyclist image
dataset composed of a diverse set of over 20,000 images. Next, the object
detection model was deployed onto a Google Coral Dev Board mini-computer with a
camera module and analyzed for speed, reaching inference times as low as 15
milliseconds. Lastly, the end-to-end blind spot cyclist detection device was
tested in real-time to model traffic scenarios and analyzed further for
performance and feasibility. We concluded that this portable blind spot alert
device can accurately and quickly detect cyclists and have the potential to
significantly improve cyclist safety. Future studies could determine the
feasibility of the proposed device in the trucking industry and improvements to
cyclist safety over time.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 13:54:13 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Tang",
"Charles",
""
]
] |
new_dataset
| 0.999265 |
2303.11291
|
Veljko Pejovic
|
Matev\v{z} Fabjan\v{c}i\v{c}, Octavian Machidon, Hashim Sharif, Yifan
Zhao, Sa\v{s}a Misailovi\'c, Veljko Pejovi\'c
|
Mobiprox: Supporting Dynamic Approximate Computing on Mobiles
|
26 pages, 9 figures
| null | null | null |
cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Runtime-tunable context-dependent network compression would make mobile deep
learning adaptable to often varying resource availability, input "difficulty",
or user needs. The existing compression techniques significantly reduce the
memory, processing, and energy tax of deep learning, yet the resulting models
tend to be permanently impaired, sacrificing the inference power for reduced
resource usage. The existing tunable compression approaches, on the other hand,
require expensive re-training, seldom provide mobile-ready implementations, and
do not support arbitrary strategies for adapting the compression.
In this paper we present Mobiprox, a framework enabling flexible-accuracy
on-device deep learning. Mobiprox implements tunable approximations of tensor
operations and enables runtime adaptation of individual network layers. A
profiler and a tuner included with Mobiprox identify the most promising neural
network approximation configurations leading to the desired inference quality
with the minimal use of resources. Furthermore, we develop control strategies
that, depending on contextual factors such as the input data difficulty,
dynamically adjust the approximation level of a model. We implement Mobiprox in
Android OS and through experiments in diverse mobile domains, including human
activity recognition and spoken keyword detection, demonstrate that it can save
up to 15% system-wide energy with a minimal impact on the inference accuracy.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 21:40:23 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Fabjančič",
"Matevž",
""
],
[
"Machidon",
"Octavian",
""
],
[
"Sharif",
"Hashim",
""
],
[
"Zhao",
"Yifan",
""
],
[
"Misailović",
"Saša",
""
],
[
"Pejović",
"Veljko",
""
]
] |
new_dataset
| 0.991714 |
2303.11320
|
Xi Chen
|
Xi Chen, Yau Shing Jonathan Cheung, Ser-Nam Lim, Hengshuang Zhao
|
ScribbleSeg: Scribble-based Interactive Image Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive segmentation enables users to extract masks by providing simple
annotations to indicate the target, such as boxes, clicks, or scribbles. Among
these interaction formats, scribbles are the most flexible as they can be of
arbitrary shapes and sizes. This enables scribbles to provide more indications
of the target object. However, previous works mainly focus on click-based
configuration, and the scribble-based setting is rarely explored. In this work,
we attempt to formulate a standard protocol for scribble-based interactive
segmentation. Basically, we design diversified strategies to simulate scribbles
for training, propose a deterministic scribble generator for evaluation, and
construct a challenging benchmark. Besides, we build a strong framework
ScribbleSeg, consisting of a Prototype Adaption Module (PAM) and a Corrective
Refine Module (CRM), for the task. Extensive experiments show that ScribbleSeg
performs notably better than previous click-based methods. We hope this could
serve as a more powerful and general solution for interactive segmentation. Our
code will be made available.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 17:57:03 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Chen",
"Xi",
""
],
[
"Cheung",
"Yau Shing Jonathan",
""
],
[
"Lim",
"Ser-Nam",
""
],
[
"Zhao",
"Hengshuang",
""
]
] |
new_dataset
| 0.999499 |
2303.11327
|
Chuang Gan
|
Yining Hong, Chunru Lin, Yilun Du, Zhenfang Chen, Joshua B. Tenenbaum,
Chuang Gan
|
3D Concept Learning and Reasoning from Multi-View Images
|
CVPR 2023. Project page: https://vis-www.cs.umass.edu/3d-clr/
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Humans are able to accurately reason in 3D by gathering multi-view
observations of the surrounding world. Inspired by this insight, we introduce a
new large-scale benchmark for 3D multi-view visual question answering
(3DMV-VQA). This dataset is collected by an embodied agent actively moving and
capturing RGB images in an environment using the Habitat simulator. In total,
it consists of approximately 5k scenes, 600k images, paired with 50k questions.
We evaluate various state-of-the-art models for visual reasoning on our
benchmark and find that they all perform poorly. We suggest that a principled
approach for 3D reasoning from multi-view images should be to infer a compact
3D representation of the world from the multi-view images, which is further
grounded on open-vocabulary semantic concepts, and then to execute reasoning on
these 3D representations. As the first step towards this approach, we propose a
novel 3D concept learning and reasoning (3D-CLR) framework that seamlessly
combines these components via neural fields, 2D pre-trained vision-language
models, and neural reasoning operators. Experimental results suggest that our
framework outperforms baseline models by a large margin, but the challenge
remains largely unsolved. We further perform an in-depth analysis of the
challenges and highlight potential future directions.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 17:59:49 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Hong",
"Yining",
""
],
[
"Lin",
"Chunru",
""
],
[
"Du",
"Yilun",
""
],
[
"Chen",
"Zhenfang",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.999729 |
2303.11328
|
Ruoshi Liu
|
Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey
Zakharov, Carl Vondrick
|
Zero-1-to-3: Zero-shot One Image to 3D Object
|
Website: https://zero123.cs.columbia.edu/
| null | null | null |
cs.CV cs.GR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an
object given just a single RGB image. To perform novel view synthesis in this
under-constrained setting, we capitalize on the geometric priors that
large-scale diffusion models learn about natural images. Our conditional
diffusion model uses a synthetic dataset to learn controls of the relative
camera viewpoint, which allow new images to be generated of the same object
under a specified camera transformation. Even though it is trained on a
synthetic dataset, our model retains a strong zero-shot generalization ability
to out-of-distribution datasets as well as in-the-wild images, including
impressionist paintings. Our viewpoint-conditioned diffusion approach can
further be used for the task of 3D reconstruction from a single image.
Qualitative and quantitative experiments show that our method significantly
outperforms state-of-the-art single-view 3D reconstruction and novel view
synthesis models by leveraging Internet-scale pre-training.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 17:59:50 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Liu",
"Ruoshi",
""
],
[
"Wu",
"Rundi",
""
],
[
"Van Hoorick",
"Basile",
""
],
[
"Tokmakov",
"Pavel",
""
],
[
"Zakharov",
"Sergey",
""
],
[
"Vondrick",
"Carl",
""
]
] |
new_dataset
| 0.996808 |
1710.03219
|
Noah Fleming
|
Paul Beame, Noah Fleming, Russell Impagliazzo, Antonina Kolokolova,
Denis Pankratov, Toniann Pitassi, Robert Robere
|
Stabbing Planes
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a new semi-algebraic proof system called Stabbing Planes which
formalizes modern branch-and-cut algorithms for integer programming and is in
the style of DPLL-based modern SAT solvers. As with DPLL there is only a single
rule: the current polytope can be subdivided by branching on an inequality and
its "integer negation." That is, we can (nondeterministically choose) a
hyperplane $ax \geq b$ with integer coefficients, which partitions the polytope
into three pieces: the points in the polytope satisfying $ax \geq b$, the
points satisfying $ax \leq b-1$, and the middle slab $b - 1 < ax < b$. Since
the middle slab contains no integer points it can be safely discarded, and the
algorithm proceeds recursively on the other two branches. Each path terminates
when the current polytope is empty, which is polynomial-time checkable. Among
our results, we show that Stabbing Planes can efficiently simulate the Cutting
Planes proof system, and is equivalent to a tree-like variant of the RCP system
of [Krajicek98]. As well, we show that it possesses short proofs of the
canonical family of systems of $\mathbb{F}_2$-linear equations known as the
Tseitin formulas. Finally, we prove linear lower bounds on the rank of Stabbing
Planes refutations by adapting lower bounds in communication complexity and use
these bounds in order to show that Stabbing Planes proofs cannot be balanced.
In doing so, we show that real communication protocols cannot be balanced and
establish the first lower bound on the real communication complexity of the set
disjointness function.
|
[
{
"version": "v1",
"created": "Mon, 9 Oct 2017 17:56:24 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 16:55:28 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2023 15:40:20 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Beame",
"Paul",
""
],
[
"Fleming",
"Noah",
""
],
[
"Impagliazzo",
"Russell",
""
],
[
"Kolokolova",
"Antonina",
""
],
[
"Pankratov",
"Denis",
""
],
[
"Pitassi",
"Toniann",
""
],
[
"Robere",
"Robert",
""
]
] |
new_dataset
| 0.995466 |
2201.08978
|
Moein Khazraee
|
Moein Khazraee, Alex Forencich, George Papen, Alex C. Snoeren and
Aaron Schulman
|
Rosebud: Making FPGA-Accelerated Middlebox Development More Pleasant
|
20 pages. Final version, to appear in ASPLOS23
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce an approach to designing FPGA-accelerated middleboxes that
simplifies development, debugging, and performance tuning by decoupling the
tasks of hardware-accelerator implementation and software-application
programming. Rosebud is a framework that links hardware accelerators to a
high-performance packet processing pipeline through a standardized
hardware/software interface. This separation of concerns allows hardware
developers to focus on optimizing custom accelerators while freeing software
programmers to reuse, configure, and debug accelerators in a fashion akin to
software libraries. We show the benefits of the Rosebud framework by building a
firewall based on a large blacklist and porting the Pigasus IDS
pattern-matching accelerator in less than a month. Our experiments demonstrate
that Rosebud delivers high performance, serving ~200 Gbps of traffic while
adding only 0.7-7 microseconds of latency.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 07:10:13 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 07:48:17 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2023 01:09:08 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Khazraee",
"Moein",
""
],
[
"Forencich",
"Alex",
""
],
[
"Papen",
"George",
""
],
[
"Snoeren",
"Alex C.",
""
],
[
"Schulman",
"Aaron",
""
]
] |
new_dataset
| 0.997475 |
2203.17179
|
Diana Costa
|
Diana Costa
|
4DL: a four-valued Dynamic logic and its proof-theory
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transition systems are often used to describe the behaviour of software
systems. If viewed as a graph then, at their most basic level, vertices
correspond to the states of a program and each edge represents a transition
between states via the (atomic) action labelled. In this setting, systems are
thought to be consistent so that at each state formulas are evaluated as either
True or False.
On the other hand, when a structure of this sort - for example a map where
states represent locations, some local properties are known and labelled
transitions represent information available about different routes - is built
resorting to multiple sources of information, it is common to find inconsistent
or incomplete information regarding what holds at each state, both at the level
of propositional variables and transitions.
This paper aims at bringing together Belnap's four values, Dynamic Logic and
hybrid machinery such as nominals and the satisfaction operator, so that
reasoning is still possible in face of contradicting evidence. Proof-theory for
this new logic is explored by means of a terminating, sound and complete
tableaux system.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 16:51:40 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 20:48:38 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Costa",
"Diana",
""
]
] |
new_dataset
| 0.985056 |
2209.00776
|
Chuanhang Yan
|
Chuanhang Yan, Yu Sun, Qian Bao, Jinhui Pang, Wu Liu, Tao Mei
|
WOC: A Handy Webcam-based 3D Online Chatroom
| null | null |
10.1145/3503161.3547743
| null |
cs.HC cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We develop WOC, a webcam-based 3D virtual online chatroom for multi-person
interaction, which captures the 3D motion of users and drives their individual
3D virtual avatars in real-time. Compared to the existing wearable
equipment-based solution, WOC offers convenient and low-cost 3D motion capture
with a single camera. To promote the immersive chat experience, WOC provides
high-fidelity virtual avatar manipulation, which also supports the user-defined
characters. With the distributed data flow service, the system delivers highly
synchronized motion and voice for all users. Deployed on the website and no
installation required, users can freely experience the virtual online chat at
https://yanch.cloud.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 01:34:14 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 14:33:59 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Yan",
"Chuanhang",
""
],
[
"Sun",
"Yu",
""
],
[
"Bao",
"Qian",
""
],
[
"Pang",
"Jinhui",
""
],
[
"Liu",
"Wu",
""
],
[
"Mei",
"Tao",
""
]
] |
new_dataset
| 0.999726 |
2209.01496
|
Jingyuan Zhang
|
Jingyuan Zhang, Ao Wang, Xiaolong Ma, Benjamin Carver, Nicholas John
Newman, Ali Anwar, Lukas Rupprecht, Dimitrios Skourtis, Vasily Tarasov, Feng
Yan, Yue Cheng
|
InfiniStore: Elastic Serverless Cloud Storage
|
An extensive report of the paper accepted by VLDB 2023
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud object storage such as AWS S3 is cost-effective and highly elastic but
relatively slow, while high-performance cloud storage such as AWS ElastiCache
is expensive and provides limited elasticity. We present a new cloud storage
service called ServerlessMemory, which stores data using the memory of
serverless functions. ServerlessMemory employs a sliding-window-based memory
management strategy inspired by the garbage collection mechanisms used in
programming languages to effectively segregate hot/cold data and provides
fine-grained elasticity, good performance, and a pay-per-access cost model with
extremely low cost.
We then design and implement InfiniStore, a persistent and elastic cloud
storage system, which seamlessly couples the function-based ServerlessMemory
layer with a persistent, inexpensive cloud object store layer. InfiniStore
enables durability despite function failures using a fast parallel recovery
scheme built on the autoscaling functionality of a FaaS (Function-as-a-Service)
platform. We evaluate InfiniStore extensively using both microbenchmarking and
two real-world applications. Results show that InfiniStore has more performance
benefits for objects larger than 10 MB compared to AWS ElastiCache and Anna,
and InfiniStore achieves 26.25% and 97.24% tenant-side cost reduction compared
to InfiniCache and ElastiCache, respectively.
|
[
{
"version": "v1",
"created": "Sat, 3 Sep 2022 20:35:23 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 19:01:17 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Mar 2023 20:08:53 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Zhang",
"Jingyuan",
""
],
[
"Wang",
"Ao",
""
],
[
"Ma",
"Xiaolong",
""
],
[
"Carver",
"Benjamin",
""
],
[
"Newman",
"Nicholas John",
""
],
[
"Anwar",
"Ali",
""
],
[
"Rupprecht",
"Lukas",
""
],
[
"Skourtis",
"Dimitrios",
""
],
[
"Tarasov",
"Vasily",
""
],
[
"Yan",
"Feng",
""
],
[
"Cheng",
"Yue",
""
]
] |
new_dataset
| 0.999062 |
2210.05194
|
Ziling Heng
|
Ziling Heng, Xinran Wang
|
New infinite families of near MDS codes holding $t$-designs
|
34 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In ``Infinite families of near MDS codes holding $t$-designs, IEEE Trans.
Inform. Theory, 2020, 66(9), pp. 5419-5428'', Ding and Tang made a breakthrough
in constructing the first two infinite families of NMDS codes holding
$2$-designs or $3$-designs. Up to now, there are only a few known infinite
families of NMDS codes holding $t$-designs in the literature. The objective of
this paper is to construct new infinite families of NMDS codes holding
$t$-designs. We determine the weight enumerators of the NMDS codes and prove
that the NMDS codes hold $2$-designs or $3$-designs. Compared with known
$t$-designs from NMDS codes, ours have different parameters. Besides, several
infinite families of optimal locally recoverable codes are also derived via the
NMDS codes.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 06:57:17 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 00:58:57 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Heng",
"Ziling",
""
],
[
"Wang",
"Xinran",
""
]
] |
new_dataset
| 0.998881 |
2210.06048
|
Alexander Dittrich
|
Alexander Dittrich, Jan Schneider, Simon Guist, Nico G\"urtler, Heiko
Ott, Thomas Steinbrenner, Bernhard Sch\"olkopf, Dieter B\"uchler
|
AIMY: An Open-source Table Tennis Ball Launcher for Versatile and
High-fidelity Trajectory Generation
|
Accepted for ICRA 2023
| null | null | null |
cs.RO cs.AR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To approach the level of advanced human players in table tennis with robots,
generating varied ball trajectories in a reproducible and controlled manner is
essential. Current ball launchers used in robot table tennis either do not
provide an interface for automatic control or are limited in their capabilities
to adapt speed, direction, and spin of the ball. For these reasons, we present
AIMY, a three-wheeled open-hardware and open-source table tennis ball launcher,
which can generate ball speeds and spins of up to 15.4 m s$^{-1}$ and 192 s$^{-1}$,
respectively, which are comparable to advanced human players. The wheel speeds,
launch orientation and time can be fully controlled via an open Ethernet or
Wi-Fi interface. We provide a detailed overview of the core design features, as
well as open source the software to encourage distribution and duplication
within and beyond the robot table tennis research community. We also
extensively evaluate the ball launcher's accuracy for different system settings
and learn to launch a ball to desired locations. With this ball launcher, we
enable long-duration training of robot table tennis approaches where the
complexity of the ball trajectory can be automatically adjusted, enabling
large-scale real-world online reinforcement learning for table tennis robots.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 09:37:40 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2022 09:16:43 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2023 10:49:15 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Dittrich",
"Alexander",
""
],
[
"Schneider",
"Jan",
""
],
[
"Guist",
"Simon",
""
],
[
"Gürtler",
"Nico",
""
],
[
"Ott",
"Heiko",
""
],
[
"Steinbrenner",
"Thomas",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Büchler",
"Dieter",
""
]
] |
new_dataset
| 0.999814 |
2211.08542
|
Tianxing Xu
|
Tian-Xing Xu, Yuan-Chen Guo, Yu-Kun Lai, Song-Hai Zhang
|
CXTrack: Improving 3D Point Cloud Tracking with Contextual Information
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D single object tracking plays an essential role in many applications, such
as autonomous driving. It remains a challenging problem due to the large
appearance variation and the sparsity of points caused by occlusion and limited
sensor capabilities. Therefore, contextual information across two consecutive
frames is crucial for effective object tracking. However, points containing
such useful information are often overlooked and cropped out in existing
methods, leading to insufficient use of important contextual knowledge. To
address this issue, we propose CXTrack, a novel transformer-based network for
3D object tracking, which exploits ConteXtual information to improve the
tracking results. Specifically, we design a target-centric transformer network
that directly takes point features from two consecutive frames and the previous
bounding box as input to explore contextual information and implicitly
propagate target cues. To achieve accurate localization for objects of all
sizes, we propose a transformer-based localization head with a novel center
embedding module to distinguish the target from distractors. Extensive
experiments on three large-scale datasets, KITTI, nuScenes and Waymo Open
Dataset, show that CXTrack achieves state-of-the-art tracking performance while
running at 34 FPS.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 11:29:01 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 02:34:48 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Xu",
"Tian-Xing",
""
],
[
"Guo",
"Yuan-Chen",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Zhang",
"Song-Hai",
""
]
] |
new_dataset
| 0.999345 |
2211.14563
|
Arushi Goel
|
Arushi Goel, Basura Fernando, Frank Keller and Hakan Bilen
|
Who are you referring to? Coreference resolution in image narrations
|
15 pages
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Coreference resolution aims to identify words and phrases which refer to the same
entity in a text, a core task in natural language processing. In this paper, we
extend this task to resolving coreferences in long-form narrations of visual
scenes. First we introduce a new dataset with annotated coreference chains and
their bounding boxes, as most existing image-text datasets only contain short
sentences without coreferring expressions or labeled chains. We propose a new
technique that learns to identify coreference chains using weak supervision,
only from image-text pairs and a regularization using prior linguistic
knowledge. Our model yields large performance gains over several strong
baselines in resolving coreferences. We also show that coreference resolution
helps improve the grounding of narratives in images.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 13:33:42 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 15:12:13 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Goel",
"Arushi",
""
],
[
"Fernando",
"Basura",
""
],
[
"Keller",
"Frank",
""
],
[
"Bilen",
"Hakan",
""
]
] |
new_dataset
| 0.999676 |
2212.00648
|
Sagi Eppel
|
Manuel S. Drehwald, Sagi Eppel, Jolina Li, Han Hao, Alan Aspuru-Guzik
|
One-shot recognition of any material anywhere using contrastive learning
with physics-based rendering
|
for associated code and dataset, see
https://zenodo.org/record/7390166#.Y5ku6mHMJH4 or
https://e1.pcloud.link/publink/show?code=kZIiSQZCYU5M4HOvnQykql9jxF4h0KiC5MX
and https://icedrive.net/s/A13FWzZ8V2aP9T4ufGQ1N3fBZxDF
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual recognition of materials and their states is essential for
understanding most aspects of the world, from determining whether food is
cooked, metal is rusted, or a chemical reaction has occurred. However, current
image recognition methods are limited to specific classes and properties and
can't handle the vast number of material states in the world. To address this,
we present MatSim: the first dataset and benchmark for computer vision-based
recognition of similarities and transitions between materials and textures,
focusing on identifying any material under any conditions using one or a few
examples. The dataset contains synthetic and natural images. The synthetic
images were rendered using giant collections of textures, objects, and
environments generated by computer graphics artists. We use mixtures and
gradual transitions between materials to allow the system to learn cases with
smooth transitions between states (like gradually cooked food). We also render
images with materials inside transparent containers to support beverage and
chemistry lab use cases. We use this dataset to train a siamese net that
identifies the same material in different objects, mixtures, and environments.
The descriptor generated by this net can be used to identify the states of
materials and their subclasses using a single image. We also present the first
few-shot material recognition benchmark with images from a wide range of
fields, including the state of foods and drinks, types of grounds, and many
other use cases. We show that a net trained on the MatSim synthetic dataset
outperforms state-of-the-art models like Clip on the benchmark and also
achieves good results on other unsupervised material classification tasks.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 16:49:53 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Dec 2022 02:12:27 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Mar 2023 04:06:24 GMT"
},
{
"version": "v4",
"created": "Fri, 17 Mar 2023 04:40:59 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Drehwald",
"Manuel S.",
""
],
[
"Eppel",
"Sagi",
""
],
[
"Li",
"Jolina",
""
],
[
"Hao",
"Han",
""
],
[
"Aspuru-Guzik",
"Alan",
""
]
] |
new_dataset
| 0.999826 |
2212.05993
|
Jiabao Lei
|
Jiabao Lei, Jiapeng Tang, Kui Jia
|
RGBD2: Generative Scene Synthesis via Incremental View Inpainting using
RGBD Diffusion Models
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the challenge of recovering an underlying scene geometry and
colors from a sparse set of RGBD view observations. In this work, we present a
new solution termed RGBD$^2$ that sequentially generates novel RGBD views along
a camera trajectory, and the scene geometry is simply the fusion result of
these views. More specifically, we maintain an intermediate surface mesh used
for rendering new RGBD views, which is subsequently completed by an
inpainting network; each rendered RGBD view is later back-projected as a
partial surface and is supplemented into the intermediate mesh. The use of
intermediate mesh and camera projection helps solve the tough problem of
multi-view inconsistency. We practically implement the RGBD inpainting network
as a versatile RGBD diffusion model, which is previously used for 2D generative
modeling; we make a modification to its reverse diffusion process to enable our
use. We evaluate our approach on the task of 3D scene synthesis from sparse
RGBD inputs; extensive experiments on the ScanNet dataset demonstrate the
superiority of our approach over existing ones. Project page:
https://jblei.site/proj/rgbd-diffusion.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 15:50:00 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 07:27:15 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Lei",
"Jiabao",
""
],
[
"Tang",
"Jiapeng",
""
],
[
"Jia",
"Kui",
""
]
] |
new_dataset
| 0.997258 |
2301.01113
|
Thanh Le-Cong Le-Cong Thanh
|
Thanh Le-Cong, Duc-Minh Luong, Xuan Bach D. Le, David Lo, Nhat-Hoa
Tran, Bui Quang-Huy and Quyet-Thang Huynh
|
Invalidator: Automated Patch Correctness Assessment via Semantic and
Syntactic Reasoning
| null |
IEEE Transactions on Software Engineering, 2023
|
10.1109/TSE.2023.3255177
| null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automated program repair (APR) faces the challenge of test overfitting, where
generated patches pass validation tests but fail to generalize. Existing
methods for patch assessment involve generating new tests or manual inspection,
which can be time-consuming or biased. In this paper, we propose a novel
technique, INVALIDATOR, to automatically assess the correctness of
APR-generated patches via semantic and syntactic reasoning. INVALIDATOR
leverages program invariants to reason about program semantics while also
capturing program syntax through language semantics learned from a large code
corpus using a pre-trained language model. Given a buggy program and the
developer-patched program, INVALIDATOR infers likely invariants on both
programs. Then, INVALIDATOR determines that an APR-generated patch overfits if:
(1) it violates correct specifications or (2) maintains erroneous behaviors
from the original buggy program. In case our approach fails to determine an
overfitting patch based on invariants, INVALIDATOR utilizes a trained model
from labeled patches to assess patch correctness based on program syntax. The
benefit of INVALIDATOR is threefold. First, INVALIDATOR leverages both semantic
and syntactic reasoning to enhance its discriminative capability. Second,
INVALIDATOR does not require new test cases to be generated, but instead only
relies on the current test suite and uses invariant inference to generalize
program behaviors. Third, INVALIDATOR is fully automated. Experimental results
demonstrate that INVALIDATOR outperforms existing methods in terms of Accuracy
and F-measure, correctly identifying 79% of overfitting patches and detecting
23% more overfitting patches than the best baseline.
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 14:16:32 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 10:56:58 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Le-Cong",
"Thanh",
""
],
[
"Luong",
"Duc-Minh",
""
],
[
"Le",
"Xuan Bach D.",
""
],
[
"Lo",
"David",
""
],
[
"Tran",
"Nhat-Hoa",
""
],
[
"Quang-Huy",
"Bui",
""
],
[
"Huynh",
"Quyet-Thang",
""
]
] |
new_dataset
| 0.987317 |
2302.03008
|
Nooshin Yousefzadeh Hosseini
|
Nooshin Yousefzadeh, Charlie Tran, Adolfo Ramirez-Zamora, Jinghua
Chen, Ruogu Fang, My T. Thai
|
LAVA: Granular Neuron-Level Explainable AI for Alzheimer's Disease
Assessment from Fundus Images
|
27 pages, 11 figures
| null | null | null |
cs.LG cs.AI eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Alzheimer's Disease (AD) is a progressive neurodegenerative disease and the
leading cause of dementia. Early diagnosis is critical for patients to benefit
from potential intervention and treatment. The retina has been hypothesized as
a diagnostic site for AD detection owing to its anatomical connection with the
brain. AI models developed for this purpose have yet to provide a rational
explanation for their decisions or to infer the stage of the disease's
progression. Along this direction, we propose a novel model-agnostic
explainable-AI framework, called Granular Neuron-level Explainer (LAVA), an
interpretation prototype that probes into intermediate layers of the
Convolutional Neural Network (CNN) models to assess the AD continuum directly
from the retinal imaging without longitudinal or clinical evaluation. This
method is applied to validate the retinal vasculature as a biomarker and
diagnostic modality for Alzheimer's Disease (AD) evaluation. UK Biobank
cognitive tests and vascular morphological features suggest LAVA shows strong
promise and effectiveness in identifying AD stages across the progression
continuum.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 18:43:10 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 20:58:37 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Yousefzadeh",
"Nooshin",
""
],
[
"Tran",
"Charlie",
""
],
[
"Ramirez-Zamora",
"Adolfo",
""
],
[
"Chen",
"Jinghua",
""
],
[
"Fang",
"Ruogu",
""
],
[
"Thai",
"My T.",
""
]
] |
new_dataset
| 0.99806 |
2302.10727
|
Long Wang
|
Isabella Huang, Qianwen Zhao, Maxine Fontaine, Long Wang
|
Design Project of an Open-Source, Low-Cost, and Lightweight Robotic
Manipulator for High School Students
|
Accepted to ASEE Zone 1 Conference
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, there is an increasing interest in high school robotics
extracurriculars such as robotics clubs and robotics competitions. The growing
demand is a result of more ubiquitous open-source software and affordable
off-the-shelf hardware kits, which significantly help lower the barrier for
entry-level robotics hobbyists. In this project, we present an open-source,
low-cost, and lightweight robotic manipulator designed and developed by a high
school researcher under the guidance of a university faculty and a Ph.D.
student. We believe the presented project is suitable for high school robotics
research and educational activities. Our open-source package consists of
mechanical design models, mechatronics specifications, and software program
source codes. The mechanical design models include CAD (Computer Aided Design)
files that are ready for prototyping (3D printing technology) and serve as an
assembly guide accommodated with a complete bill of materials. Electrical
wiring diagrams and low-level controllers are documented in detail as part of
the open-source software package. The educational objective of this project is
to enable high school student teams to replicate and build a robotic
manipulator. The engineering experience that high school students acquire in
the proposed project is full-stack, including mechanical design, mechatronics,
and programming. The project significantly enriches their hands-on engineering
experience in a project-based environment. Throughout this project, we
discovered that the high school researcher was able to apply multidisciplinary
knowledge from K-12 STEM courses to build the robotic manipulator. The
researcher was able to go through a system engineering design and development
process and obtain skills to use professional engineering tools including
SolidWorks and Arduino microcontrollers.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 15:23:36 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 20:58:21 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Huang",
"Isabella",
""
],
[
"Zhao",
"Qianwen",
""
],
[
"Fontaine",
"Maxine",
""
],
[
"Wang",
"Long",
""
]
] |
new_dataset
| 0.973306 |
2303.00952
|
Kailun Yang
|
Kunyu Peng, David Schneider, Alina Roitberg, Kailun Yang, Jiaming
Zhang, M. Saquib Sarfraz, Rainer Stiefelhagen
|
MuscleMap: Towards Video-based Activated Muscle Group Estimation
|
The datasets and code will be publicly available at
https://github.com/KPeng9510/MuscleMap
| null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we tackle the new task of video-based Activated Muscle Group
Estimation (AMGE) aiming at identifying active muscle regions during physical
activity. To this end, we provide the MuscleMap136 dataset featuring >15K
video clips with 136 different activities and 20 labeled muscle groups. This
dataset opens the vistas to multiple video-based applications in sports and
rehabilitation medicine. We further complement the main MuscleMap136 dataset,
which specifically targets physical exercise, with Muscle-UCF90 and
Muscle-HMDB41, which are new variants of the well-known activity recognition
benchmarks extended with AMGE annotations. To make the AMGE model applicable in
real-life situations, it is crucial to ensure that the model can generalize
well to types of physical activities not present during training and involving
new combinations of activated muscles. To achieve this, our benchmark also
covers an evaluation setting where the model is exposed to activity types
excluded from the training set. Our experiments reveal that generalizability of
existing architectures adapted for the AMGE task remains a challenge.
Therefore, we also propose a new approach, TransM3E, which employs a
transformer-based model with cross-modal multi-label knowledge distillation and
surpasses all popular video classification models when dealing with both,
previously seen and new types of physical activities. The datasets and code
will be publicly available at https://github.com/KPeng9510/MuscleMap.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 04:12:53 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 05:55:02 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Peng",
"Kunyu",
""
],
[
"Schneider",
"David",
""
],
[
"Roitberg",
"Alina",
""
],
[
"Yang",
"Kailun",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Sarfraz",
"M. Saquib",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
new_dataset
| 0.993003 |
2303.08954
|
Aditya Gupta
|
Rahul Goel, Waleed Ammar, Aditya Gupta, Siddharth Vashishtha, Motoki
Sano, Faiz Surani, Max Chang, HyunJeong Choe, David Greene, Kyle He, Rattima
Nitisaroj, Anna Trukhina, Shachi Paul, Pararth Shah, Rushin Shah and Zhou Yu
|
PRESTO: A Multilingual Dataset for Parsing Realistic Task-Oriented
Dialogs
|
PRESTO v1 Release
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Research interest in task-oriented dialogs has increased as systems such as
Google Assistant, Alexa and Siri have become ubiquitous in everyday life.
However, the impact of academic research in this area has been limited by the
lack of datasets that realistically capture the wide array of user pain points.
To enable research on some of the more challenging aspects of parsing realistic
conversations, we introduce PRESTO, a public dataset of over 550K contextual
multilingual conversations between humans and virtual assistants. PRESTO
contains a diverse array of challenges that occur in real-world NLU tasks such
as disfluencies, code-switching, and revisions. It is the only large scale
human generated conversational parsing dataset that provides structured context
such as a user's contacts and lists for each example. Our mT5-based baselines
demonstrate that the conversational phenomena present in PRESTO are challenging
to model, and this difficulty is further pronounced in a low-resource setup.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 21:51:13 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 02:26:52 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Goel",
"Rahul",
""
],
[
"Ammar",
"Waleed",
""
],
[
"Gupta",
"Aditya",
""
],
[
"Vashishtha",
"Siddharth",
""
],
[
"Sano",
"Motoki",
""
],
[
"Surani",
"Faiz",
""
],
[
"Chang",
"Max",
""
],
[
"Choe",
"HyunJeong",
""
],
[
"Greene",
"David",
""
],
[
"He",
"Kyle",
""
],
[
"Nitisaroj",
"Rattima",
""
],
[
"Trukhina",
"Anna",
""
],
[
"Paul",
"Shachi",
""
],
[
"Shah",
"Pararth",
""
],
[
"Shah",
"Rushin",
""
],
[
"Yu",
"Zhou",
""
]
] |
new_dataset
| 0.99936 |
2303.09306
|
Md Zarif Ul Alam
|
HAZ Sameen Shahgir, Ramisa Alam, Md. Zarif Ul Alam
|
BanglaCoNER: Towards Robust Bangla Complex Named Entity Recognition
|
Winning Solution for the Bangla Complex Named Entity Recognition
Challenge
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Named Entity Recognition (NER) is a fundamental task in natural language
processing that involves identifying and classifying named entities in text.
However, little work has been done on complex named entity recognition (CNER) in
Bangla, despite it being the seventh most spoken language globally. CNER is a more
challenging task than traditional NER as it involves identifying and classifying
complex and compound entities, which are not common in the Bangla language. In
this paper, we present the winning solution of the Bangla Complex Named Entity
Recognition Challenge - addressing the CNER task on the BanglaCoNER dataset using
two different approaches, namely Conditional Random Fields (CRF) and fine-tuning
transformer-based deep learning models such as BanglaBERT.
The dataset consisted of 15300 sentences for training and 800 sentences for
validation, in the .conll format. Exploratory Data Analysis (EDA) on the
dataset revealed that the dataset had 7 different NER tags, with notable
presence of English words, suggesting that the dataset is synthetic and likely
a product of translation.
We experimented with a variety of feature combinations including Part of
Speech (POS) tags, word suffixes, Gazetteers, and cluster information from
embeddings, while also finetuning the BanglaBERT (large) model for NER. We
found that not all linguistic patterns are immediately apparent or even intuitive
to humans, which is why deep learning-based models have proved to be more
effective in NLP, including for the CNER task. Our fine-tuned BanglaBERT (large)
model achieves an F1 score of 0.79 on the validation set. Overall, our study
highlights the importance of Bangla complex named entity recognition, particularly
in the context of synthetic datasets. Our findings also demonstrate the efficacy
of deep learning models such as BanglaBERT for NER in the Bangla language.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 13:31:31 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 15:13:01 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Shahgir",
"HAZ Sameen",
""
],
[
"Alam",
"Ramisa",
""
],
[
"Alam",
"Md. Zarif Ul",
""
]
] |
new_dataset
| 0.982293 |
2303.09602
|
Andr\'e Borgato Morelli
|
Andre Borgato Morelli, Andr\'e de Carvalho Fiedler, Andr\'e Luiz Cunha
|
Um banco de dados de empregos formais georreferenciados em cidades
brasileiras
|
9 pages, in Portuguese language, 3 figures, 1 table, Data
presentation paper
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, transport planning has changed its paradigm from projects oriented
to guarantee service levels to projects oriented to guarantee accessibility to
opportunities. In this context, a number of studies and tools aimed at
calculating accessibility are being made available; however, these tools depend
on job location data that are not always easily accessible. Thus, this work
proposes the creation of a database with the locations of formal jobs in
Brazilian cities. The method uses the RAIS jobs database and the CNEFE street
faces database to infer the location of jobs in urban regions from the zip code
and the number of non-residential addresses on street faces. As a result, jobs
can be located more accurately in large and medium-sized cities and
approximately in single zip code cities. Finally, the databases are made
available openly so that researchers and planning professionals can easily
apply accessibility analyses throughout the national territory.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 19:08:07 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Morelli",
"Andre Borgato",
""
],
[
"Fiedler",
"André de Carvalho",
""
],
[
"Cunha",
"André Luiz",
""
]
] |
new_dataset
| 0.999251 |
2303.09648
|
Jinfan Zhou
|
Jinfan Zhou, William Muirhead, Simon C. Williams, Danail Stoyanov,
Hani J. Marcus, and Evangelos B. Mazomenos
|
Shifted-Windows Transformers for the Detection of Cerebral Aneurysms in
Microsurgery
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Purpose: Microsurgical Aneurysm Clipping Surgery (MACS) carries a high risk
for intraoperative aneurysm rupture. Automated recognition of instances when
the aneurysm is exposed in the surgical video would be a valuable reference
point for neuronavigation, indicating phase transitioning and more importantly
designating moments of high risk for rupture. This article introduces the MACS
dataset containing 16 surgical videos with frame-level expert annotations and
proposes a learning methodology for surgical scene understanding identifying
video frames with the aneurysm present in the operating microscope's
field-of-view. Methods: Despite the dataset imbalance (80% no presence, 20%
presence), and although developed without explicit annotations, we demonstrate the
applicability of Transformer-based deep learning architectures (MACSSwin-T,
vidMACSSwin-T) to detect the aneurysm and classify MACS frames accordingly. We
evaluate the proposed models in multiple-fold cross-validation experiments with
independent sets and in an unseen set of 15 images against 10 human experts
(neurosurgeons). Results: Average (across folds) accuracy of 80.8% (range
78.5%-82.4%) and 87.1% (range 85.1%-91.3%) is obtained for the image- and
video-level approaches, respectively, demonstrating that the models effectively
learn the classification task. Qualitative evaluation of the models' class
activation maps show these to be localized on the aneurysm's actual location.
Depending on the decision threshold, MACSWin-T achieves 66.7% to 86.7% accuracy
in the unseen images, compared to 82% of human raters, with moderate to strong
correlation.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 20:58:48 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Zhou",
"Jinfan",
""
],
[
"Muirhead",
"William",
""
],
[
"Williams",
"Simon C.",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Marcus",
"Hani J.",
""
],
[
"Mazomenos",
"Evangelos B.",
""
]
] |
new_dataset
| 0.997705 |
2303.09655
|
Vani Nagarajan
|
Vani Nagarajan, Milind Kulkarni
|
RT-DBSCAN: Accelerating DBSCAN using Ray Tracing Hardware
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
General Purpose computing on Graphical Processing Units (GPGPU) has resulted
in unprecedented levels of speedup over its CPU counterparts, allowing
programmers to harness the computational power of GPU shader cores to
accelerate other computing applications. But this style of acceleration is best
suited for regular computations (e.g., linear algebra). Recent GPUs feature new
Ray Tracing (RT) cores that instead speed up the irregular process of ray
tracing using Bounding Volume Hierarchies. While these cores seem limited in
functionality, they can be used to accelerate n-body problems by leveraging RT
cores to accelerate the required distance computations. In this work, we
propose RT-DBSCAN, the first RT-accelerated DBSCAN implementation. We use RT
cores to accelerate Density-Based Clustering of Applications with Noise
(DBSCAN) by translating fixed-radius nearest neighbor queries to ray tracing
queries. We show that leveraging the RT hardware results in speedups between
1.3x to 4x over current state-of-the-art, GPU-based DBSCAN implementations.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 21:24:06 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Nagarajan",
"Vani",
""
],
[
"Kulkarni",
"Milind",
""
]
] |
new_dataset
| 0.982692 |
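The record above maps DBSCAN's fixed-radius nearest-neighbour queries onto ray-tracing hardware. As a hedged illustration of the query being accelerated (not of the RT-core mapping itself), the sketch below answers the same fixed-radius query with a CPU k-d tree and runs a baseline DBSCAN for reference; the data and parameters are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))  # hypothetical 3-D point cloud
eps = 0.05

# The core primitive DBSCAN needs: all neighbours within a fixed radius.
tree = cKDTree(points)
neighbours_of_0 = tree.query_ball_point(points[0], r=eps)
print(f"point 0 has {len(neighbours_of_0)} neighbours within eps")

# Baseline CPU clustering with the same radius, for reference.
labels = DBSCAN(eps=eps, min_samples=8).fit_predict(points)
print(f"clusters found: {labels.max() + 1}")
```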
2303.09694
|
Balsam Alkouz
|
Shilong Guo, Balsam Alkouz, Babar Shahzaad, Abdallah Lakhdari, Athman
Bouguettaya
|
Drone Formation for Efficient Swarm Energy Consumption
|
3 pages, 7 figures. This is an accepted demo paper and it will appear
in The 21st International Conference on Pervasive Computing and
Communications (PerCom 2023)
| null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
We demonstrate formation flying for drone swarm services. A set of drones fly
in four different swarm formations. A dataset is collected to study the effect
of formation flying on energy consumption. We conduct a set of experiments to
study the effect of wind on formation flying. We examine the forces the drones
exert on each other when flying in a formation. We finally identify and
classify the formations that conserve most energy under varying wind
conditions. The collected dataset aims at providing researchers data to conduct
further research in swarm-based drone service delivery. Demo:
https://youtu.be/NnucUWhUwLs
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 23:52:52 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Guo",
"Shilong",
""
],
[
"Alkouz",
"Balsam",
""
],
[
"Shahzaad",
"Babar",
""
],
[
"Lakhdari",
"Abdallah",
""
],
[
"Bouguettaya",
"Athman",
""
]
] |
new_dataset
| 0.998687 |
2303.09733
|
Zhengyi Liu
|
Zhengyi Liu, Xiaoshen Huang, Guanghui Zhang, Xianyong Fang, Linbo
Wang, Bin Tang
|
Scribble-Supervised RGB-T Salient Object Detection
|
ICME2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient object detection segments attractive objects in scenes. RGB and
thermal modalities provide complementary information and scribble annotations
alleviate large amounts of human labor. Based on the above facts, we propose a
scribble-supervised RGB-T salient object detection model. By a four-step
solution (expansion, prediction, aggregation, and supervision), the label-sparsity
challenge of the scribble-supervised method is addressed. To expand scribble
annotations, we collect the superpixels that foreground scribbles pass through
in RGB and thermal images, respectively. The expanded multi-modal labels
provide the coarse object boundary. To further polish the expanded labels, we
propose a prediction module to alleviate the sharpness of the boundary. To exploit
the complementary roles of the two modalities, we combine the two into aggregated
pseudo labels. Supervised by scribble annotations and pseudo labels, our model
achieves the state-of-the-art performance on the relabeled RGBT-S dataset.
Furthermore, the model is applied to RGB-D and video scribble-supervised
applications, achieving consistently excellent performance.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 02:27:59 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Liu",
"Zhengyi",
""
],
[
"Huang",
"Xiaoshen",
""
],
[
"Zhang",
"Guanghui",
""
],
[
"Fang",
"Xianyong",
""
],
[
"Wang",
"Linbo",
""
],
[
"Tang",
"Bin",
""
]
] |
new_dataset
| 0.962366 |
2303.09735
|
Qibin Hou
|
Yupeng Zhou, Zhen Li, Chun-Le Guo, Song Bai, Ming-Ming Cheng, Qibin
Hou
|
SRFormer: Permuted Self-Attention for Single Image Super-Resolution
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Previous works have shown that increasing the window size for
Transformer-based image super-resolution models (e.g., SwinIR) can
significantly improve the model performance but the computation overhead is
also considerable. In this paper, we present SRFormer, a simple but novel
method that can enjoy the benefit of large window self-attention but introduces
even less computational burden. The core of our SRFormer is the permuted
self-attention (PSA), which strikes an appropriate balance between the channel
and spatial information for self-attention. Our PSA is simple and can be easily
applied to existing super-resolution networks based on window self-attention.
Without any bells and whistles, we show that our SRFormer achieves a 33.86dB
PSNR score on the Urban100 dataset, which is 0.46dB higher than that of SwinIR
but uses fewer parameters and computations. We hope our simple and effective
approach can serve as a useful tool for future research in super-resolution
model design.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 02:38:44 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Zhou",
"Yupeng",
""
],
[
"Li",
"Zhen",
""
],
[
"Guo",
"Chun-Le",
""
],
[
"Bai",
"Song",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Hou",
"Qibin",
""
]
] |
new_dataset
| 0.993089 |
2303.09743
|
Tzu-Sheng Kuo
|
Tzu-Sheng Kuo, Hong Shen, Jisoo Geum, Nev Jones, Jason I. Hong, Haiyi
Zhu, Kenneth Holstein
|
Understanding Frontline Workers' and Unhoused Individuals' Perspectives
on AI Used in Homeless Services
| null |
Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems (CHI '23)
|
10.1145/3544548.3580882
| null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have seen growing adoption of AI-based decision-support systems
(ADS) in homeless services, yet we know little about stakeholder desires and
concerns surrounding their use. In this work, we aim to understand impacted
stakeholders' perspectives on a deployed ADS that prioritizes scarce housing
resources. We employed AI lifecycle comicboarding, an adapted version of the
comicboarding method, to elicit stakeholder feedback and design ideas across
various components of an AI system's design. We elicited feedback from county
workers who operate the ADS daily, service providers whose work is directly
impacted by the ADS, and unhoused individuals in the region. Our participants
shared concerns and design suggestions around the AI system's overall
objective, specific model design choices, dataset selection, and use in
deployment. Our findings demonstrate that stakeholders, even without AI
knowledge, can provide specific and critical feedback on an AI system's design
and deployment, if empowered to do so.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 02:46:45 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Kuo",
"Tzu-Sheng",
""
],
[
"Shen",
"Hong",
""
],
[
"Geum",
"Jisoo",
""
],
[
"Jones",
"Nev",
""
],
[
"Hong",
"Jason I.",
""
],
[
"Zhu",
"Haiyi",
""
],
[
"Holstein",
"Kenneth",
""
]
] |
new_dataset
| 0.95026 |
2303.09785
|
Veerendra Bobbili Raj Kumar
|
Darshan Gera, Badveeti Naveen Siva Kumar, Bobbili Veerendra Raj Kumar,
S Balasubramanian
|
ABAW : Facial Expression Recognition in the wild
|
6 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The fifth Affective Behavior Analysis in-the-wild (ABAW) competition has
multiple challenges, such as the Valence-Arousal Estimation Challenge, the
Expression Classification Challenge, the Action Unit Detection Challenge, and
the Emotional Reaction Intensity Estimation Challenge. In this paper, we have
dealt only with the Expression Classification Challenge, using multiple
approaches: fully supervised, semi-supervised, and noisy-label. Our noise-aware
model has performed better than the baseline model by 10.46%, the
semi-supervised model has performed better than the baseline model by 9.38%,
and the fully supervised model has performed better than the baseline by 9.34%.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 06:01:04 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Gera",
"Darshan",
""
],
[
"Kumar",
"Badveeti Naveen Siva",
""
],
[
"Kumar",
"Bobbili Veerendra Raj",
""
],
[
"Balasubramanian",
"S",
""
]
] |
new_dataset
| 0.952431 |
2303.09791
|
Shuai Fu
|
Shuai Fu, Tim Dwyer, Peter J. Stuckey, Jackson Wain, Jesse Linossier
|
ChameleonIDE: Untangling Type Errors Through Interactive Visualization
and Exploration
| null | null | null | null |
cs.HC cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Dynamically typed programming languages are popular in education and the
software industry. While presenting a low barrier to entry, they suffer from
run-time type errors and longer-term problems in code quality and
maintainability. Statically typed languages, while showing strength in these
aspects, lack in learnability and ease of use. In particular, fixing type
errors poses challenges to both novice users and experts. Further,
compiler-type error messages are presented in a static way that is biased
toward the first occurrence of the error in the program code. To help users
resolve such type errors, we introduce ChameleonIDE, a type debugging tool that
presents type errors to the user in an unbiased way, allowing them to explore
the full context of where the errors could occur. Programmers can interactively
verify the steps of reasoning against their intention. Through three studies
involving real programmers, we showed that ChameleonIDE is more effective in
fixing type errors than traditional text-based error messages. This difference
is more significant in harder tasks. Further, programmers actively using
ChameleonIDE's interactive features are shown to be more efficient in fixing
type errors than passively reading the type error output.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 06:24:52 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Fu",
"Shuai",
""
],
[
"Dwyer",
"Tim",
""
],
[
"Stuckey",
"Peter J.",
""
],
[
"Wain",
"Jackson",
""
],
[
"Linossier",
"Jesse",
""
]
] |
new_dataset
| 0.978142 |
2303.09797
|
Haozhe Wu
|
Haozhe Wu, Jia Jia, Junliang Xing, Hongwei Xu, Xiangyuan Wang, Jelo
Wang
|
MMFace4D: A Large-Scale Multi-Modal 4D Face Dataset for Audio-Driven 3D
Face Animation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio-Driven Face Animation is an eagerly anticipated technique for
applications such as VR/AR, games, and movie making. With the rapid development
of 3D engines, there is an increasing demand for driving 3D faces with audio.
However, currently available 3D face animation datasets are either limited in
scale or unsatisfactory in quality, which hampers further development of
audio-driven 3D face animation. To address this challenge, we propose MMFace4D,
a large-scale multi-modal 4D (3D sequence) face dataset consisting of 431
identities, 35,904 sequences, and 3.9 million frames. MMFace4D has three
appealing characteristics: 1) highly diversified subjects and corpus, 2)
synchronized audio and 3D mesh sequence with high-resolution face details, and
3) low storage cost with a new efficient compression algorithm on 3D mesh
sequences. These characteristics enable the training of high-fidelity,
expressive, and generalizable face animation models. Upon MMFace4D, we
construct a challenging benchmark of audio-driven 3D face animation with a
strong baseline, which enables non-autoregressive generation with fast
inference speed and outperforms the state-of-the-art autoregressive method. The
whole benchmark will be released.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 06:43:08 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Wu",
"Haozhe",
""
],
[
"Jia",
"Jia",
""
],
[
"Xing",
"Junliang",
""
],
[
"Xu",
"Hongwei",
""
],
[
"Wang",
"Xiangyuan",
""
],
[
"Wang",
"Jelo",
""
]
] |
new_dataset
| 0.99987 |
2303.09820
|
Giuseppe Filippone
|
Carolin Hannusch and Giuseppe Filippone
|
Decoding algorithm for HL-codes and performance of the DHH-cryptosystem
-- a candidate for post-quantum cryptography
|
24 pages, 4 figures, 14 references
| null | null | null |
cs.CR cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We give a decoding algorithm for a class of error-correcting codes that can be
used in the DHH-cryptosystem, a McEliece-type candidate for post-quantum
cryptography. Furthermore, we implement the encryption and decryption
algorithms for this cryptosystem and investigate its performance.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 08:01:54 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Hannusch",
"Carolin",
""
],
[
"Filippone",
"Giuseppe",
""
]
] |
new_dataset
| 0.982252 |