id (stringlengths 9–10) | submitter (stringlengths 2–52, ⌀) | authors (stringlengths 4–6.51k) | title (stringlengths 4–246) | comments (stringlengths 1–523, ⌀) | journal-ref (stringlengths 4–345, ⌀) | doi (stringlengths 11–120, ⌀) | report-no (stringlengths 2–243, ⌀) | categories (stringlengths 5–98) | license (stringclasses 9 values) | abstract (stringlengths 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses 1 value) | probability (float64 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.10351
|
Fan Liu
|
Fan Liu, Siqi Lai, Yansong Ning, Hao Liu
|
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated
Graph Neural Network
| null | null | null | null |
cs.LG cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Graph Neural Network (FedGNN) has recently emerged as a rapidly
growing research topic, as it integrates the strengths of graph neural networks
and federated learning to enable advanced machine learning applications without
direct access to sensitive data. Despite its advantages, the distributed nature
of FedGNN introduces additional vulnerabilities, particularly backdoor attacks
stemming from malicious participants. Although graph backdoor attacks have been
explored, the compounded complexity introduced by the combination of GNNs and
federated learning has hindered a comprehensive understanding of these attacks,
as existing research lacks extensive benchmark coverage and in-depth analysis
of critical factors. To address these limitations, we propose Bkd-FedGNN, a
benchmark for backdoor attacks on FedGNN. Specifically, Bkd-FedGNN decomposes
the graph backdoor attack into trigger generation and injection steps, and
extends the attack to the node-level federated setting, resulting in a
unified framework that covers both node-level and graph-level classification
tasks. Moreover, we thoroughly investigate the impact of multiple critical
factors in backdoor attacks on FedGNN. These factors are categorized into
global-level and local-level factors, including data distribution, the number
of malicious attackers, attack time, overlapping rate, trigger size, trigger
type, trigger position, and poisoning rate. Finally, we conduct comprehensive
evaluations on 13 benchmark datasets and 13 critical factors, comprising 1,725
experimental configurations for node-level and graph-level tasks from six
domains. These experiments encompass over 8,000 individual tests, allowing us
to provide a thorough evaluation and insightful observations that advance our
understanding of backdoor attacks on FedGNN. The Bkd-FedGNN benchmark is
publicly available at https://github.com/usail-hkust/BkdFedGCN.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 13:51:33 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Liu",
"Fan",
""
],
[
"Lai",
"Siqi",
""
],
[
"Ning",
"Yansong",
""
],
[
"Liu",
"Hao",
""
]
] |
new_dataset
| 0.960819 |
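The Bkd-FedGNN entry above decomposes a graph backdoor attack into trigger generation and trigger injection, governed by factors such as trigger size, target label, and poisoning rate. The snippet below is a minimal, hypothetical sketch of node-level trigger injection on a dense adjacency matrix; the fully connected trigger, random trigger features, and single target label are illustrative assumptions, not the benchmark's actual implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_node_trigger(adj, feats, labels, poison_rate=0.1,
                        trigger_size=3, target_label=0):
    """Attach a small fully connected trigger subgraph to a random subset of
    nodes and relabel those victims with the attacker's target class."""
    n, f = feats.shape
    victims = rng.choice(n, size=max(1, int(poison_rate * n)), replace=False)

    # Trigger nodes: seeded random features, fully connected among themselves.
    trig_feats = rng.normal(size=(trigger_size, f))
    trig_adj = np.ones((trigger_size, trigger_size)) - np.eye(trigger_size)

    new_n = n + trigger_size
    new_adj = np.zeros((new_n, new_n))
    new_adj[:n, :n] = adj
    new_adj[n:, n:] = trig_adj
    # Connect every victim node to every trigger node (symmetric edges).
    new_adj[np.ix_(victims, np.arange(n, new_n))] = 1
    new_adj[np.ix_(np.arange(n, new_n), victims)] = 1

    new_feats = np.vstack([feats, trig_feats])
    new_labels = np.concatenate([labels, np.full(trigger_size, target_label)])
    new_labels[victims] = target_label          # poisoned supervision
    return new_adj, new_feats, new_labels, victims

# Toy usage on a random undirected graph.
N, F = 30, 8
adj = (rng.random((N, N)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
feats = rng.normal(size=(N, F))
labels = rng.integers(0, 4, size=N)
p_adj, p_feats, p_labels, victims = inject_node_trigger(adj, feats, labels)
print(p_adj.shape, p_feats.shape, p_labels[victims])
```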
2306.10354
|
Yunlong Tang
|
Yunlong Tang, Jinrui Zhang, Xiangchen Wang, Teng Wang, Feng Zheng
|
LLMVA-GEBC: Large Language Model with Video Adapter for Generic Event
Boundary Captioning
|
Winner solution to Generic Event Boundary Captioning task in LOVEU
Challenge (CVPR 2023 workshop)
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our winning entry for the CVPR 2023 Generic Event Boundary Captioning (GEBC)
competition is detailed in this paper. Unlike conventional video captioning
tasks, GEBC demands that the captioning model possess an understanding of
immediate changes in status around the designated video boundary, making it a
difficult task. This paper proposes an effective model LLMVA-GEBC (Large
Language Model with Video Adapter for Generic Event Boundary Captioning): (1)
We utilize a pretrained LLM for generating human-like captions with high
quality. (2) To adapt the model to the GEBC task, we take the video Q-former as
an adapter and train it with the frozen visual feature extractors and LLM. Our
proposed method achieved a 76.14 score on the test set and won first place
in the challenge. Our code is available at
https://github.com/zjr2000/LLMVA-GEBC .
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 13:55:54 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Tang",
"Yunlong",
""
],
[
"Zhang",
"Jinrui",
""
],
[
"Wang",
"Xiangchen",
""
],
[
"Wang",
"Teng",
""
],
[
"Zheng",
"Feng",
""
]
] |
new_dataset
| 0.989588 |
2306.10372
|
Zhou Tang
|
Zhou Tang, and Zhiwu Zhang
|
Ladder: A software to label images, detect objects and deploy models
recurrently for object detection
|
5 pages, 2 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object Detection (OD) is a computer vision technology that can locate and
classify objects in images and videos, which has the potential to significantly
improve efficiency in precision agriculture. To simplify the OD application
process, we developed Ladder, a software tool that provides users with a friendly
graphical user interface (GUI) that allows for efficient labelling of training
datasets, training OD models, and deploying the trained model. Ladder was
designed with an interactive recurrent framework that leverages predictions
from a pre-trained OD model as the initial image labeling. After adding human
labels, the newly labeled images can be added into the training data to retrain
the OD model. With the same GUI, users can also deploy well-trained OD models
by loading the model weight file to detect new images. We used Ladder to
develop a deep learning model to assess wheat stripe rust in RGB (red, green,
blue) images taken by an Unmanned Aerial Vehicle (UAV). Ladder employs OD to
directly evaluate different severity levels of wheat stripe rust in field
images, eliminating the need for a photo stitching process for UAV-based images.
The accuracies for low, medium and high severity scores were 72%, 50% and 80%,
respectively. This case demonstrates how Ladder empowers OD in precision
agriculture and crop breeding.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 15:13:08 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Tang",
"Zhou",
""
],
[
"Zhang",
"Zhiwu",
""
]
] |
new_dataset
| 0.961529 |
2306.10392
|
Akshat Gupta
|
Akshat Gupta, Laxman Singh Tomar, Ridhima Garg
|
GlyphNet: Homoglyph domains dataset and detection using attention-based
Convolutional Neural Networks
| null |
AAAI AICS Conference 2023
| null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Cyber attacks deceive machines into believing something that does not exist
in the first place. However, there are some to which even humans fall prey. One
such famous attack that attackers have used over the years to exploit the
vulnerability of vision is the homoglyph attack. It employs a simple
yet effective mechanism to create illegitimate domains that are hard to
differentiate from legitimate ones. Moreover, because the difference is nearly
impossible for a user to notice, users cannot stop themselves from
clicking on these homoglyph domain names. In many cases, that results in either
information theft or malware attack on their systems. Existing approaches use
simple, string-based comparison techniques applied in primary language-based
tasks. Although they are impactful to some extent, they usually fail because
they are not robust to different types of homoglyphs and are computationally
infeasible because their time requirement is proportional to the string
length. Similarly, neural network-based approaches are employed to distinguish
real domain strings from fake ones. Nevertheless, the problem with both methods
is that they require paired sequences of real and fake domain strings to work
with, which is often not the case in the real world, as the attacker only sends
the illegitimate or homoglyph domain to the vulnerable user. Therefore,
existing approaches are not suitable for practical scenarios in the real world.
In our work, we created GlyphNet, an image dataset that contains 4M domains,
both real and homoglyphs. Additionally, we introduce a baseline method for a
homoglyph attack detection system using an attention-based convolutional Neural
Network. We show that our model can reach state-of-the-art accuracy in
detecting homoglyph attacks with a 0.93 AUC on our dataset.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 17:16:53 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Gupta",
"Akshat",
""
],
[
"Tomar",
"Laxman Singh",
""
],
[
"Garg",
"Ridhima",
""
]
] |
new_dataset
| 0.999819 |
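The GlyphNet entry above centers on homoglyph domains, i.e. domains built by swapping characters for visually similar ones. The sketch below illustrates that generation step only; the substitution table and swap policy are assumptions made for the example and are unrelated to how the 4M-domain dataset was actually produced.

```python
import random

# Small assumed table of visually similar substitutes (Cyrillic letters,
# digits, etc.); a real generator would use a much larger confusables list.
HOMOGLYPHS = {
    "a": ["\u0430"],          # Cyrillic small a
    "e": ["\u0435"],          # Cyrillic small ie
    "o": ["0", "\u043e"],     # zero, Cyrillic small o
    "i": ["1", "l", "\u0456"],
    "l": ["1", "I"],
}

def make_homoglyph(domain: str, n_swaps: int = 2, seed: int = 0) -> str:
    """Return a look-alike domain with up to n_swaps character substitutions."""
    rnd = random.Random(seed)
    chars = list(domain)
    candidates = [i for i, c in enumerate(chars) if c in HOMOGLYPHS]
    for i in rnd.sample(candidates, min(n_swaps, len(candidates))):
        chars[i] = rnd.choice(HOMOGLYPHS[chars[i]])
    return "".join(chars)

print(make_homoglyph("paypal.com"))   # e.g. a variant with a Cyrillic letter
```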
2306.10413
|
Federica Barontini
|
F. Barontini, M.G. Catalano, S. Fani, G. Grioli, M. Bianchi, A. Bicchi
|
The CUFF, Clenching Upper-limb Force Feedback wearable device: design,
characterization and validation
|
12 pages, 11 figures, 2 table
| null | null | null |
cs.HC cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the design, characterization and validation of a wearable
haptic device able to convey skin stretch, force feedback, and a combination of
both, to the user's arm. In this work, we carried out physical and perceptual
characterization with eleven able-bodied participants, as well as two
experiments on discrimination and manipulation tasks involving a total of 32
participants. In both experiments the CUFF was used in conjunction with the
Pisa/IIT SoftHand. The first experiment was a discrimination task in which the
subjects had to recognize the dimension and the softness of pairs of
cylinders. In the second experiment the subjects were asked to control the
robotic hand to grasp objects. After the experiments the subjects underwent
a subjective evaluation of the device. Results of the experiments and
questionnaire showed the effectiveness of the proposed device. Thanks to its
versatility and structure, the device could be a viable solution for
teleoperation applications, guidance and rehabilitation tasks, including
prosthesis applications.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 19:37:36 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Barontini",
"F.",
""
],
[
"Catalano",
"M. G.",
""
],
[
"Fani",
"S.",
""
],
[
"Grioli",
"G.",
""
],
[
"Bianchi",
"M.",
""
],
[
"Bicchi",
"A.",
""
]
] |
new_dataset
| 0.99881 |
2306.10477
|
Jiahu Qin
|
Jianmin Qin, Jiahu Qin, Jiaxin Qiu, Qingchen Liu, Man Li, Qichao Ma
|
SRL-ORCA: A Socially Aware Multi-Agent Mapless Navigation Algorithm In
Complex Dynamic Scenes
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
For real-world navigation, it is important to endow robots with the
capabilities to navigate safely and efficiently in a complex environment with
both dynamic and non-convex static obstacles. However, achieving path-finding
in non-convex complex environments without maps as well as enabling multiple
robots to follow social rules for obstacle avoidance remain challenging
problems. In this letter, we propose a socially aware robot mapless navigation
algorithm, namely Safe Reinforcement Learning-Optimal Reciprocal Collision
Avoidance (SRL-ORCA). This is a multi-agent safe reinforcement learning
algorithm that uses ORCA as external knowledge to provide a safety guarantee.
This algorithm further introduces traffic norms of human society to improve
social comfort and achieve cooperative avoidance by following human social
customs. Experimental results show that SRL-ORCA learns strategies to
obey specific traffic rules. Compared to DRL, SRL-ORCA shows a significant
improvement in navigation success rate in different complex scenarios while
applying the same training network. SRL-ORCA is able to cope
with non-convex obstacle environments without falling into local minimal
regions and has a 14.1\% improvement in path quality (i.e., the average time to
target) compared to ORCA. Videos are available at https://youtu.be/huhXfCDkGws.
|
[
{
"version": "v1",
"created": "Sun, 18 Jun 2023 05:06:21 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Qin",
"Jianmin",
""
],
[
"Qin",
"Jiahu",
""
],
[
"Qiu",
"Jiaxin",
""
],
[
"Liu",
"Qingchen",
""
],
[
"Li",
"Man",
""
],
[
"Ma",
"Qichao",
""
]
] |
new_dataset
| 0.998596 |
2306.10621
|
Manos Kamarianakis
|
Manos Kamarianakis, Antonis Protopsaltis, Dimitris Angelis, Paul
Zikas, Mike Kentros, George Papagiannakis
|
UniSG^GA: A 3D scenegraph powered by Geometric Algebra unifying
geometry, behavior and GNNs towards generative AI
|
7 pages, 5 figures, A version of this paper was submitted to the
ENGAGE workshop of CGI 2023
| null | null | null |
cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work introduces UniSG^GA, a novel integrated
scenegraph structure that incorporates behavior and geometry data of a 3D
scene. It is specifically designed to seamlessly integrate Graph Neural
Networks (GNNs) and address the challenges associated with transforming a 3D
scenegraph (3D-SG) during generative tasks. To effectively capture and preserve
the topological relationships between objects in a simplified way, within the
graph representation, we propose UniSG^GA, which seamlessly integrates Geometric
Algebra (GA) forms. This novel approach enhances the overall performance and
capability of GNNs in handling generative and predictive tasks, opening up new
possibilities and aiming to lay the foundation for further exploration and
development of graph-based generative AI models that can effectively
incorporate behavior data for enhanced scene generation and synthesis.
|
[
{
"version": "v1",
"created": "Sun, 18 Jun 2023 19:01:56 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Kamarianakis",
"Manos",
""
],
[
"Protopsaltis",
"Antonis",
""
],
[
"Angelis",
"Dimitris",
""
],
[
"Zikas",
"Paul",
""
],
[
"Kentros",
"Mike",
""
],
[
"Papagiannakis",
"George",
""
]
] |
new_dataset
| 0.995447 |
2306.10675
|
Haomin Wen
|
Lixia Wu, Haomin Wen, Haoyuan Hu, Xiaowei Mao, Yutong Xia, Ergang
Shan, Jianbin Zhen, Junhong Lou, Yuxuan Liang, Liuqing Yang, Roger
Zimmermann, Youfang Lin, Huaiyu Wan
|
LaDe: The First Comprehensive Last-mile Delivery Dataset from Industry
| null | null | null | null |
cs.DB cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Real-world last-mile delivery datasets are crucial for research in logistics,
supply chain management, and spatio-temporal data mining. Despite a plethora of
algorithms developed to date, no widely accepted, publicly available last-mile
delivery dataset exists to support research in this field. In this paper, we
introduce \texttt{LaDe}, the first publicly available last-mile delivery
dataset with millions of packages from the industry. LaDe has three unique
characteristics: (1) Large-scale. It involves 10,677k packages of 21k couriers
over 6 months of real-world operation. (2) Comprehensive information. It offers
original package information, such as its location and time requirements, as
well as task-event information, which records when and where the courier is
when events such as task-accept and task-finish happen. (3) Diversity.
The dataset includes data from various scenarios, including package pick-up and
delivery, and from multiple cities, each with its unique spatio-temporal
patterns due to their distinct characteristics such as populations. We verify
LaDe on three tasks by running several classical baseline models per task. We
believe that the large-scale, comprehensive, and diverse features of LaDe can offer
unparalleled opportunities to researchers in the supply chain community, data
mining community, and beyond. The dataset homepage is publicly available at
https://huggingface.co/datasets/Cainiao-AI/LaDe.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 02:30:28 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wu",
"Lixia",
""
],
[
"Wen",
"Haomin",
""
],
[
"Hu",
"Haoyuan",
""
],
[
"Mao",
"Xiaowei",
""
],
[
"Xia",
"Yutong",
""
],
[
"Shan",
"Ergang",
""
],
[
"Zhen",
"Jianbin",
""
],
[
"Lou",
"Junhong",
""
],
[
"Liang",
"Yuxuan",
""
],
[
"Yang",
"Liuqing",
""
],
[
"Zimmermann",
"Roger",
""
],
[
"Lin",
"Youfang",
""
],
[
"Wan",
"Huaiyu",
""
]
] |
new_dataset
| 0.999893 |
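Since the LaDe entry above points to a Hugging Face dataset repository, the following sketch shows one way to pull the files locally with huggingface_hub; the assumption that the repository contains CSV files is only for the inspection step and may not match the actual file layout.

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository referenced in the abstract.
local_dir = snapshot_download(repo_id="Cainiao-AI/LaDe", repo_type="dataset")

# Assumed inspection step: list any CSV files that were downloaded.
csv_files = sorted(Path(local_dir).rglob("*.csv"))
print(f"downloaded to {local_dir}, found {len(csv_files)} CSV files")
```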
2306.10727
|
Tomoki Sugimoto
|
Tomoki Sugimoto, Yasumasa Onoe, Hitomi Yanaka
|
Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating
Generalization Capacity of Language Models
|
To appear in the Proceedings of the Association for Computational
Linguistics: Student Research Workshop (ACL-SRW 2023)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural Language Inference (NLI) tasks involving temporal inference remain
challenging for pre-trained language models (LMs). Although various datasets
have been created for this task, they primarily focus on English and do not
address the need for resources in other languages. It is unclear whether
current LMs realize the generalization capacity for temporal inference across
languages. In this paper, we present Jamp, a Japanese NLI benchmark focused on
temporal inference. Our dataset includes a range of temporal inference
patterns, which enables us to conduct fine-grained analysis. To begin the data
annotation process, we create diverse inference templates based on the formal
semantics test suites. We then automatically generate diverse NLI examples by
using the Japanese case frame dictionary and well-designed templates while
controlling the distribution of inference patterns and gold labels. We evaluate
the generalization capacities of monolingual/multilingual LMs by splitting our
dataset based on tense fragments (i.e., temporal inference patterns). Our
findings demonstrate that LMs struggle with specific linguistic phenomena, such
as habituality, indicating that there is potential for the development of more
effective NLI models across languages.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 07:00:14 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Sugimoto",
"Tomoki",
""
],
[
"Onoe",
"Yasumasa",
""
],
[
"Yanaka",
"Hitomi",
""
]
] |
new_dataset
| 0.999724 |
2306.10730
|
Qinghong Sun
|
Qinghong Sun, Yangguang Li, ZeXiang Liu, Xiaoshui Huang, Fenggang Liu,
Xihui Liu, Wanli Ouyang, Jing Shao
|
UniG3D: A Unified 3D Object Generation Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of generative AI has a transformative impact on various areas,
including virtual reality, autonomous driving, the metaverse, gaming, and
robotics. Among these applications, 3D object generation techniques are of
utmost importance. This technique has unlocked fresh avenues in the realm of
creating, customizing, and exploring 3D objects. However, the quality and
diversity of existing 3D object generation methods are constrained by the
inadequacies of existing 3D object datasets, including issues related to text
quality, the incompleteness of multi-modal data representation encompassing 2D
rendered images and 3D assets, as well as the size of the dataset. In order to
resolve these issues, we present UniG3D, a unified 3D object generation dataset
constructed by employing a universal data transformation pipeline on Objaverse
and ShapeNet datasets. This pipeline converts each raw 3D model into
comprehensive multi-modal data representation <text, image, point cloud, mesh>
by employing rendering engines and multi-modal models. These modules ensure the
richness of textual information and the comprehensiveness of data
representation. Remarkably, the universality of our pipeline refers to its
ability to be applied to any 3D dataset, as it only requires raw 3D data. The
selection of data sources for our dataset is based on their scale and quality.
Subsequently, we assess the effectiveness of our dataset by employing Point-E
and SDFusion, two widely recognized methods for object generation, tailored to
the prevalent 3D representations of point clouds and signed distance functions.
Our dataset is available at: https://unig3d.github.io.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 07:03:45 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Sun",
"Qinghong",
""
],
[
"Li",
"Yangguang",
""
],
[
"Liu",
"ZeXiang",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Liu",
"Fenggang",
""
],
[
"Liu",
"Xihui",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Shao",
"Jing",
""
]
] |
new_dataset
| 0.999784 |
2306.10769
|
Isabelle Van Der Vegt
|
Isabelle van der Vegt
|
Gender Differences in Abuse: The Case of Dutch Politicians on Twitter
|
pre-print
| null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Online abuse and threats towards politicians have become a significant
concern in the Netherlands, like in many other countries across the world. This
paper analyses gender differences in abuse received by Dutch politicians on
Twitter, while taking into account the possible additional impact of ethnic
minority status. All tweets directed at party leaders throughout the entire
year of 2022 were collected. The effects of gender and ethnic minority status
were estimated for six different linguistic measures of abuse, namely,
toxicity, severe toxicity, identity attacks, profanity, insults, and threats.
Contrary to expectations, male politicians received higher levels of all forms
of abuse, with the exception of threats, for which no significant gender
difference was found. Significant interaction effects between gender and ethnic
minority status were found for a number of abuse measures. In the case of
severe toxicity, identity attacks, and profanity, female ethnic minority
politicians were more severely impacted than their ethnic majority female
colleagues, but not worse than male politicians. Finally, female ethnic
minority politicians received the highest levels of threats compared to all
groups. Given that online abuse and threats are reported to have a negative
effect on political participation and retention, these results are particularly
worrying.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 08:23:24 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"van der Vegt",
"Isabelle",
""
]
] |
new_dataset
| 0.986069 |
2306.10807
|
Joana Fonseca
|
Joana Fonseca
|
The Myth of Meritocracy and the Matilda Effect in STEM: Paper Acceptance
and Paper Citation
| null | null | null | null |
cs.DL physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Biases against women in the workplace have been documented in various
studies. There is also a growing body of literature on biases within academia.
Particularly in STEM, a heavily male-dominated field, studies
suggest that if one's gender is identifiable, women are more likely to have
their papers rejected and to be cited less often than men. We propose two simple
modifications to tackle gender bias in STEM that can be applied to (but not
only) IEEE conferences and journals. Regarding paper acceptance, we propose a
double-blind review, and regarding paper citation, we propose one single letter
to identify the authors' first names, followed by their family names. We also
propose other modifications regarding gender bias in STEM and academia and
encourage further reforms supported by current research on this topic with
gender-segregated data.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 09:53:52 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Fonseca",
"Joana",
""
]
] |
new_dataset
| 0.961448 |
2306.10833
|
Elias Goldsztejn
|
Elias Goldsztejn, Tal Feiner, Ronen Brafman
|
PTDRL: Parameter Tuning using Deep Reinforcement Learning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A variety of autonomous navigation algorithms exist that allow robots to move
around in a safe and fast manner. However, many of these algorithms require
parameter re-tuning when facing new environments. In this paper, we propose
PTDRL, a parameter-tuning strategy that adaptively selects from a fixed set of
parameters those that maximize the expected reward for a given navigation
system. Our learning strategy can be used for different environments, different
platforms, and different user preferences. Specifically, we attend to the
problem of social navigation in indoor spaces, using a classical motion
planning algorithm as our navigation system and training its parameters to
optimize its behavior. Experimental results show that PTDRL can outperform
other online parameter-tuning strategies.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 10:36:53 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Goldsztejn",
"Elias",
""
],
[
"Feiner",
"Tal",
""
],
[
"Brafman",
"Ronen",
""
]
] |
new_dataset
| 0.977895 |
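PTDRL, per the entry above, adaptively selects from a fixed set of navigation parameters the one that maximizes expected reward, using deep reinforcement learning. As a much-simplified stand-in, the sketch below runs an epsilon-greedy bandit over hypothetical parameter sets with a simulated reward; it illustrates the selection loop only, not the paper's actual network or reward design.

```python
import numpy as np

# Hypothetical candidate parameter sets for a local planner.
PARAM_SETS = [
    {"max_vel": 0.5, "inflation_radius": 0.6},
    {"max_vel": 0.8, "inflation_radius": 0.4},
    {"max_vel": 1.2, "inflation_radius": 0.3},
]

class EpsilonGreedySelector:
    """Pick the parameter set with the highest running mean reward."""
    def __init__(self, n_arms, eps=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.eps = eps
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)

    def select(self):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.values)))
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

selector = EpsilonGreedySelector(len(PARAM_SETS))
for episode in range(100):
    arm = selector.select()
    # Stand-in for running the navigation stack with PARAM_SETS[arm]
    # and measuring an episode reward (e.g. progress minus social discomfort).
    reward = np.random.default_rng(episode).normal(loc=arm * 0.1)
    selector.update(arm, reward)
print("preferred parameter set:", PARAM_SETS[int(np.argmax(selector.values))])
```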
2306.10843
|
Javier Naranjo-Alcazar
|
Javier Naranjo-Alcazar, Jordi Grau-Haro, David Almenar and Pedro
Zuccarello
|
Female mosquito detection by means of AI techniques inside release
containers in the context of a Sterile Insect Technique program
|
Under review at DCASE 2023 Workshop
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The Sterile Insect Technique (SIT) is a biological pest control technique
based on the release into the environment of sterile males of the insect
species whose population is to be controlled. The entire SIT process involves
mass-rearing within a biofactory, sorting of the specimens by sex,
sterilization, and subsequent release of the sterile males into the
environment. The reason for avoiding the release of female specimens is
that, unlike males, females bite, with the subsequent risk of disease
transmission. In the case of Aedes mosquito biofactories for SIT, the key point
of the whole process is sex separation. This process is nowadays performed by a
combination of mechanical devices and AI-based vision systems. However, there
is still a possibility of false negatives, so a last stage of verification is
necessary before releasing them into the environment. It is known that the
sound produced by the flapping of adult male mosquitoes is different from that
produced by females, so this feature can be used to detect the presence of
females in containers prior to environmental release. This paper presents a
study for the detection of females in Aedes mosquito release vessels for SIT
programs. The containers used consist of a tubular PVC design of 8.8 cm diameter
and 12.5 cm height. The containers were placed in an experimental setup that
allowed the recording of the sound of mosquito flight inside of them. Each
container was filled with 250 specimens considering the cases of (i) only male
mosquitoes, (ii) only female mosquitoes, and (iii) 75% males and 25% females.
Case (i) was used for training and testing, whereas cases (ii) and (iii) were
used only for testing. Two algorithms were implemented for the detection of
female mosquitoes: an unsupervised outlier detection algorithm (iForest) and a
one-class SVM trained with male-only recordings.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 10:45:10 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Naranjo-Alcazar",
"Javier",
""
],
[
"Grau-Haro",
"Jordi",
""
],
[
"Almenar",
"David",
""
],
[
"Zuccarello",
"Pedro",
""
]
] |
new_dataset
| 0.992225 |
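The mosquito-detection entry above names two detectors trained on male-only recordings: an isolation forest and a one-class SVM. The sketch below reproduces that setup on synthetic one-dimensional wingbeat-frequency features; the feature choice and the frequency values are assumptions made for illustration, not the paper's acoustic front end.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# Stand-in acoustic features: assume males cluster around one wingbeat
# frequency and females around another (values are illustrative).
male_train = rng.normal(loc=600.0, scale=30.0, size=(500, 1))
test_males = rng.normal(loc=600.0, scale=30.0, size=(150, 1))
test_females = rng.normal(loc=450.0, scale=30.0, size=(50, 1))
test = np.vstack([test_males, test_females])
labels = np.r_[np.ones(150), -np.ones(50)]          # -1 marks females (outliers)

for name, model in [("iForest", IsolationForest(random_state=0)),
                    ("one-class SVM", OneClassSVM(nu=0.05, gamma="scale"))]:
    model.fit(male_train)                            # male-only training data
    pred = model.predict(test)                       # +1 inlier, -1 outlier
    recall = (pred[labels == -1] == -1).mean()
    print(f"{name}: flagged {recall:.0%} of simulated female samples")
```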
2306.10858
|
Ting Zhe
|
Ting Zhe, Yongqian Li, Jing Zhang, Yong Luo, Han Hu, Bo Du, Yonggang
Wen, Dacheng Tao
|
FHA-Kitchens: A Novel Dataset for Fine-Grained Hand Action Recognition
in Kitchen Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A typical task in the field of video understanding is hand action
recognition, which has a wide range of applications. Existing works either
mainly focus on full-body actions, or the defined action categories are
relatively coarse-grained. In this paper, we propose FHA-Kitchens, a novel
dataset of fine-grained hand actions in kitchen scenes. In particular, we focus
on human hand interaction regions and perform deep excavation to further refine
hand action information and interaction regions. Our FHA-Kitchens dataset
consists of 2,377 video clips and 30,047 images collected from 8 different
types of dishes, and all hand interaction regions in each image are labeled
with high-quality fine-grained action classes and bounding boxes. We represent
the action information in each hand interaction region as a triplet, resulting
in a total of 878 action triplets. Based on the constructed dataset, we
benchmark representative action recognition and detection models on the
following three tracks: (1) supervised learning for hand interaction region and
object detection, (2) supervised learning for fine-grained hand action
recognition, and (3) intra- and inter-class domain generalization for hand
interaction region detection. The experimental results offer compelling
empirical evidence that highlights the challenges inherent in fine-grained hand
action recognition, while also shedding light on potential avenues for future
research, particularly in relation to pre-training strategy, model design, and
domain generalization. The dataset will be released at
https://github.com/tingZ123/FHA-Kitchens.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 11:21:59 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhe",
"Ting",
""
],
[
"Li",
"Yongqian",
""
],
[
"Zhang",
"Jing",
""
],
[
"Luo",
"Yong",
""
],
[
"Hu",
"Han",
""
],
[
"Du",
"Bo",
""
],
[
"Wen",
"Yonggang",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.999865 |
2306.10865
|
Chandan Kumar Sheemar
|
Chandan Kumar Sheemar, George C. Alexandropoulos, Dirk Slock, Jorge
Querol, and Symeon Chatzinotas
|
Full-Duplex-Enabled Joint Communications and Sensing with Reconfigurable
Intelligent Surfaces
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The full-duplex (FD) technology has the potential to radically evolve
wireless systems, facilitating the integration of both communications and radar
functionalities into a single device, thus, enabling joint communication and
sensing (JCAS). In this paper, we present a novel approach for JCAS that
incorporates a reconfigurable intelligent surface (RIS) in the near-field of an
FD multiple-input multiple-output (MIMO) node, which is jointly optimized with
the digital beamformers to enable JCAS and efficiently handle self-interference
(SI). We propose a novel problem formulation for FD MIMO JCAS systems to
jointly minimize the total received power at the FD node's radar receiver while
maximizing the sum rate of downlink communications subject to a Cram\'{e}r-Rao
bound (CRB) constraint. In contrast to the typically used CRB in the relevant
literature, we derive a novel, more accurate, target estimation bound that
fully takes into account the RIS deployment. The considered problem is solved
using alternating optimization, which is guaranteed to converge to a local
optimum. The simulation results demonstrate that the proposed scheme achieves
significant performance improvement both for communications and sensing. It is
showcased that, jointly designing the FD MIMO beamformers and the RIS phase
configuration to be SI aware can significantly loosen the requirement for
additional SI cancellation.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 11:32:14 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Sheemar",
"Chandan Kumar",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Slock",
"Dirk",
""
],
[
"Querol",
"Jorge",
""
],
[
"Chatzinotas",
"Symeon",
""
]
] |
new_dataset
| 0.998138 |
2306.10900
|
Yaqi Zhang
|
Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei
Bai, Qi Chu, Nenghai Yu, Wanli Ouyang
|
MotionGPT: Finetuned LLMs are General-Purpose Motion Generators
|
18 pages, 8 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating realistic human motion from given action descriptions has
experienced significant advancements because of the emerging requirement of
digital humans. While recent works have achieved impressive results in
generating motion directly from textual action descriptions, they often support
only a single modality of the control signal, which limits their application in
the real digital human industry. This paper presents a Motion General-Purpose
generaTor (MotionGPT) that can use multimodal control signals, e.g., text and
single-frame poses, for generating consecutive human motions by treating
multimodal signals as special input tokens in large language models (LLMs).
Specifically, we first quantize multimodal control signals into discrete codes
and then formulate them in a unified prompt instruction to ask the LLMs to
generate the motion answer. Our MotionGPT demonstrates a unified human motion
generation model with multimodal control signals by tuning a mere 0.4% of LLM
parameters. To the best of our knowledge, MotionGPT is the first method to
generate human motion by multimodal control signals, which we hope can shed
light on this new direction. Codes shall be released upon acceptance.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 12:58:17 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhang",
"Yaqi",
""
],
[
"Huang",
"Di",
""
],
[
"Liu",
"Bin",
""
],
[
"Tang",
"Shixiang",
""
],
[
"Lu",
"Yan",
""
],
[
"Chen",
"Lu",
""
],
[
"Bai",
"Lei",
""
],
[
"Chu",
"Qi",
""
],
[
"Yu",
"Nenghai",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
new_dataset
| 0.998798 |
2306.10924
|
Yi Geng
|
Yi Geng
|
Diagonal Waveform and Algorithm to Estimate Range and Velocity in
Multi-Object Scenarios
|
This paper has been accepted by 97th Vehicular Technology Conference,
2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Waveform design for joint communication and sensing (JCAS) is an important
research direction, focusing on providing an optimal tradeoff between
communication and sensing performance. In this paper, we first describe the
conventional grid-type waveform structure and the corresponding two-dimension
(2D)-discrete Fourier transform (DFT) algorithm. We then introduce an emerging
diagonal scheme, including a diagonal waveform structure and corresponding
1D-DFT diagonal algorithm. The diagonal scheme substantially reduces the
signaling overhead and computational complexity compared to the conventional
2D-DFT algorithm while still achieving the same radar performance. But the
previous study of diagonal waveform used a single target to evaluate the
performance of the diagonal scheme. This paper verifies the diagonal waveform
with simulations demonstrating its feasibility in a traffic monitoring scenario
with multiple vehicles.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 13:33:56 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Geng",
"Yi",
""
]
] |
new_dataset
| 0.980295 |
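The entry above contrasts a diagonal 1D-DFT scheme with the conventional grid-type waveform processed by a 2D-DFT. The sketch below shows only the conventional 2D-DFT range-Doppler estimation on a synthetic single-target OFDM-style channel matrix, with assumed parameter values; the diagonal algorithm itself is not reproduced here.

```python
import numpy as np

# Conventional (grid-type) 2D-DFT processing: IFFT across subcarriers gives
# range, FFT across symbols gives Doppler. All parameter values are assumed.
c = 3e8
N_sc, N_sym = 256, 128            # subcarriers x OFDM symbols
delta_f = 120e3                   # subcarrier spacing [Hz]
T_sym = 1.0 / delta_f             # symbol duration (cyclic prefix ignored)
fc = 28e9                         # carrier frequency [Hz]

range_true, vel_true = 60.0, 20.0 # target range [m], radial velocity [m/s]
tau = 2 * range_true / c
f_d = 2 * vel_true * fc / c

n = np.arange(N_sc)[:, None]      # subcarrier index
m = np.arange(N_sym)[None, :]     # symbol index
# Element-wise division of received by transmitted symbols leaves this matrix.
D = np.exp(-1j * 2 * np.pi * n * delta_f * tau) \
  * np.exp(1j * 2 * np.pi * m * T_sym * f_d)

range_doppler = np.fft.fft(np.fft.ifft(D, axis=0), axis=1)
r_bin, d_bin = np.unravel_index(np.argmax(np.abs(range_doppler)),
                                range_doppler.shape)
range_est = r_bin * c / (2 * N_sc * delta_f)
vel_est = d_bin * c / (2 * fc * N_sym * T_sym)
print(f"estimated range {range_est:.1f} m, velocity {vel_est:.1f} m/s")
```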
2306.10926
|
Maxim Vochten
|
Maxim Vochten, Ali Mousavi Mohammadi, Arno Verduyn, Tinne De Laet,
Erwin Aertbeli\"en, Joris De Schutter
|
Invariant Descriptors of Motion and Force Trajectories for Interpreting
Object Manipulation Tasks in Contact
|
18 pages, 9 figures. Submitted to IEEE Transactions on Robotics
(January 6, 2023)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Invariant descriptors of point and rigid-body motion trajectories have been
proposed in the past as representative task models for motion recognition and
generalization. Currently, no invariant descriptor exists for representing
force trajectories which appear in contact tasks. This paper introduces
invariant descriptors for force trajectories by exploiting the duality between
motion and force. Two types of invariant descriptors are presented depending on
whether the trajectories consist of screw coordinates or vector coordinates.
Methods and software are provided for robustly calculating the invariant
descriptors from noisy measurements using optimal control. Using experimental
human demonstrations of a 3D contour following task, invariant descriptors are
shown to result in task representations that do not depend on the calibration
of reference frames or sensor locations. Tuning of the optimal control problems
is shown to be fast and intuitive. Similarly as for motions in free space, the
proposed invariant descriptors for motion and force trajectories may prove
useful for the recognition and generalization of constrained motions such as
during object manipulation in contact.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 13:36:17 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Vochten",
"Maxim",
""
],
[
"Mohammadi",
"Ali Mousavi",
""
],
[
"Verduyn",
"Arno",
""
],
[
"De Laet",
"Tinne",
""
],
[
"Aertbeliën",
"Erwin",
""
],
[
"De Schutter",
"Joris",
""
]
] |
new_dataset
| 0.999014 |
2306.10945
|
Zhanyu Liu
|
Zhanyu Liu, Chumeng Liang, Guanjie Zheng, Hua Wei
|
FDTI: Fine-grained Deep Traffic Inference with Roadnet-enriched Graph
|
Accepted by ECML PKDD 2023 (ADS track)
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes the fine-grained traffic prediction task (e.g. interval
between data points is 1 minute), which is essential to traffic-related
downstream applications. Under this setting, traffic flow is highly influenced
by traffic signals and the correlation between traffic nodes is dynamic. As a
result, the traffic data is non-smooth between nodes, and it is hard to utilize
previous methods that focus on smooth traffic data. To address this problem,
we propose Fine-grained Deep Traffic Inference, termed as FDTI. Specifically,
we construct a fine-grained traffic graph based on traffic signals to model the
inter-road relations. Then, a physically-interpretable dynamic mobility
convolution module is proposed to capture vehicle moving dynamics controlled by
the traffic signals. Furthermore, traffic flow conservation is introduced to
accurately infer future volume. Extensive experiments demonstrate that our
method achieves state-of-the-art performance and learns traffic dynamics with
good properties. To the best of our knowledge, we are the first to conduct
city-level fine-grained traffic prediction.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 14:03:42 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Liu",
"Zhanyu",
""
],
[
"Liang",
"Chumeng",
""
],
[
"Zheng",
"Guanjie",
""
],
[
"Wei",
"Hua",
""
]
] |
new_dataset
| 0.99905 |
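FDTI, per the entry above, leverages traffic flow conservation to infer future volume. The toy sketch below spells out the conservation relation on a single road segment with made-up in/outflows; the real model estimates these quantities from a signal-aware, fine-grained traffic graph.

```python
import numpy as np

# Flow conservation on one segment: volume[t+1] = volume[t] + inflow - outflow.
# All numbers are illustrative.
T = 6
inflow = np.array([30, 25, 40, 35, 20, 15], dtype=float)    # vehicles/min
outflow = np.array([28, 27, 33, 38, 25, 14], dtype=float)
volume = np.zeros(T + 1)
volume[0] = 12.0                                             # initial queue

for t in range(T):
    volume[t + 1] = volume[t] + inflow[t] - outflow[t]

print("predicted volumes:", volume[1:])
```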
2306.10963
|
Jens Bayer
|
Jens Bayer and Stefan Becker and David M\"unch and Michael Arens
|
Eigenpatches -- Adversarial Patches from Principal Components
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial patches are still a simple yet powerful white box attack that can
be used to fool object detectors by suppressing possible detections. The
patches of these so-called evasion attacks are computationally expensive to
produce and require full access to the attacked detector. This paper addresses
the problem of computational expensiveness by analyzing 375 generated patches,
calculating their principal components, and showing that linear
combinations of the resulting "eigenpatches" can be used to fool object
detectors successfully.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 14:27:07 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Bayer",
"Jens",
""
],
[
"Becker",
"Stefan",
""
],
[
"Münch",
"David",
""
],
[
"Arens",
"Michael",
""
]
] |
new_dataset
| 0.995554 |
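The eigenpatches entry above computes principal components of 375 adversarial patches and forms new patches as linear combinations of them. The sketch below mirrors that pipeline with scikit-learn PCA on random stand-in patches; patch resolution, component count, and combination weights are assumptions, and real patches would be loaded from disk.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-ins for the 375 generated adversarial patches (assumed 64x64 RGB),
# flattened to vectors.
n_patches, h, w, c = 375, 64, 64, 3
patches = rng.random((n_patches, h * w * c)).astype(np.float32)

# Principal components of the patch collection ("eigenpatches").
pca = PCA(n_components=20)
pca.fit(patches)

# A new candidate patch as a linear combination of the eigenpatches.
weights = rng.normal(size=20)
new_patch = pca.mean_ + weights @ pca.components_
new_patch = new_patch.reshape(h, w, c).clip(0.0, 1.0)
print(new_patch.shape, float(new_patch.min()), float(new_patch.max()))
```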
2306.10998
|
Disha Shrivastava
|
Disha Shrivastava, Denis Kocetkov, Harm de Vries, Dzmitry Bahdanau,
Torsten Scholak
|
RepoFusion: Training Code Models to Understand Your Repository
| null | null | null | null |
cs.LG cs.AI cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the huge success of Large Language Models (LLMs) in coding assistants
like GitHub Copilot, these models struggle to understand the context present in
the repository (e.g., imports, parent classes, files with similar names, etc.),
thereby producing inaccurate code completions. This effect is more pronounced
when using these assistants for repositories that the model has not seen during
training, such as proprietary software or work-in-progress code projects.
Recent work has shown the promise of using context from the repository during
inference. In this work, we extend this idea and propose RepoFusion, a
framework to train models to incorporate relevant repository context.
Experiments on single-line code completion show that our models trained with
repository context significantly outperform much larger code models such as
CodeGen-16B-multi ($\sim73\times$ larger) and closely match the performance of
the $\sim 70\times$ larger StarCoderBase model that was trained with the
Fill-in-the-Middle objective. We find these results to be a novel and
compelling demonstration of the gains that training with repository context can
bring. We carry out extensive ablation studies to investigate the impact of
design choices such as context type, number of contexts, context length, and
initialization within our framework. Lastly, we release Stack-Repo, a dataset
of 200 Java repositories with permissive licenses and near-deduplicated files
that are augmented with three types of repository contexts. Additionally, we
are making available the code and trained checkpoints for our work. Our
released resources can be found at \url{https://huggingface.co/RepoFusion}.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 15:05:31 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Shrivastava",
"Disha",
""
],
[
"Kocetkov",
"Denis",
""
],
[
"de Vries",
"Harm",
""
],
[
"Bahdanau",
"Dzmitry",
""
],
[
"Scholak",
"Torsten",
""
]
] |
new_dataset
| 0.994351 |
2306.11011
|
Wenhao Wang
|
Xiangyi Xu, Wenhao Wang, Yongzheng Wu, Zhennan Min, Zixuan Pang, Yier
Jin
|
virtCCA: Virtualized Arm Confidential Compute Architecture with
TrustZone
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ARM recently introduced the Confidential Compute Architecture (CCA) in the
forthcoming ARMv9-A architecture. CCA enables the support of confidential virtual
machines (cVMs) within a separated world (known as the Realm world), protected
from the untrusted normal world. While CCA points to a convincing future of
confidential computing, it is foreseen that the CCA hardware will not be
available soon according to ARM's roadmap. In response, we present
\textit{virtCCA}, an architecture that facilitates virtualized CCA using
TrustZone, a mature hardware feature on existing ARM platforms. Specifically,
we use the Secure EL2 (S-EL2) extension introduced since ARMv8.4 to support the
memory isolation among the cVMs. We introduce direct shadow memory mapping --
an efficient memory protection scheme -- to overcome the limitations of
existing hardware. virtCCA is compatible with the CCA specifications at the API
level, and we build the entire CCA software and firmware stack atop virtCCA,
including the TrustZone Management Monitor (TMM) for enforcing isolation among
cVMs and supporting cVM life cycle management, as well as the enhancement of
the normal world KVM for support of cVMs. We implemented virtCCA on both QEMU
and ARM Fixed Virtual Platform (FVP). The evaluation on micro-benchmarks and
macro-benchmarks shows that the overhead of running cVMs is acceptable,
compared with the counterpart of running normal world VMs. On a set of
real-world workloads the overhead is less than 8%, with the worst case of 17%
for I/O intensive workloads.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 15:19:50 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Xu",
"Xiangyi",
""
],
[
"Wang",
"Wenhao",
""
],
[
"Wu",
"Yongzheng",
""
],
[
"Min",
"Zhennan",
""
],
[
"Pang",
"Zixuan",
""
],
[
"Jin",
"Yier",
""
]
] |
new_dataset
| 0.996742 |
2306.11013
|
David Rodr\'iguez-Mart\'inez
|
Rom\'eo Tonasso, Daniel Tataru, Hippolyte Rauch, Vincent Pozsgay,
Thomas Pfeiffer, Erik Uythoven, David Rodr\'iguez-Mart\'inez
|
A lunar reconnaissance drone for cooperative exploration and
high-resolution mapping of extreme locations
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An efficient characterization of scientifically significant locations is
essential prior to the return of humans to the Moon. The highest resolution
imagery acquired from orbit of south-polar shadowed regions and other relevant
locations remains, at best, an order of magnitude larger than the
characteristic length of most of the robotic systems to be deployed. This
hinders the planning and successful implementation of prospecting missions and
poses a high risk for the traverse of robots and humans, diminishing the
potential overall scientific and commercial return of any mission. We herein
present the design of a lightweight, compact, autonomous, and reusable lunar
reconnaissance drone capable of assisting other ground-based robotic assets,
and eventually humans, in the characterization and high-resolution mapping
(~0.1 m/px) of particularly challenging and hard-to-access locations on the
lunar surface. The proposed concept consists of two main subsystems: the drone
and its service station. With a total combined wet mass of 100 kg, the system
is capable of 11 flights without refueling the service station, enabling almost
9 km of accumulated flight distance. The deployment of such a system could
significantly impact the efficiency of upcoming exploration missions,
increasing the distance covered per day of exploration and significantly
reducing the need for recurrent contacts with ground stations on Earth.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 15:23:41 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Tonasso",
"Roméo",
""
],
[
"Tataru",
"Daniel",
""
],
[
"Rauch",
"Hippolyte",
""
],
[
"Pozsgay",
"Vincent",
""
],
[
"Pfeiffer",
"Thomas",
""
],
[
"Uythoven",
"Erik",
""
],
[
"Rodríguez-Martínez",
"David",
""
]
] |
new_dataset
| 0.998198 |
2306.11027
|
Kun Zhou
|
Wayne Xin Zhao, Kun Zhou, Beichen Zhang, Zheng Gong, Zhipeng Chen,
Yuanhang Zhou, Ji-Rong Wen, Jing Sha, Shijin Wang, Cong Liu, Guoping Hu
|
JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for
Multi-task Mathematical Problem Solving
|
Accepted by KDD 2023 ADS track, the 2.0 version of JiuZhang
(arxiv:2206.06315v1)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although pre-trained language models~(PLMs) have recently advanced the
research progress in mathematical reasoning, they are not specially designed as
a capable multi-task solver, suffering from high cost for multi-task deployment
(\eg a model copy for a task) and inferior performance on complex mathematical
problems in practical applications. To address these issues, in this paper, we
propose \textbf{JiuZhang~2.0}, a unified Chinese PLM specially for multi-task
mathematical problem solving. Our idea is to maintain a moderate-sized model
and employ the \emph{cross-task knowledge sharing} to improve the model
capacity in a multi-task setting. Specially, we construct a
Mixture-of-Experts~(MoE) architecture for modeling mathematical text, so as to
capture the common mathematical knowledge across tasks. For optimizing the MoE
architecture, we design \emph{multi-task continual pre-training} and
\emph{multi-task fine-tuning} strategies for multi-task adaptation. These
training strategies can effectively decompose the knowledge from the task data
and establish the cross-task sharing via expert networks. In order to further
improve the general capacity of solving different complex tasks, we leverage
large language models~(LLMs) as complementary models to iteratively refine the
generated solution by our PLM, via in-context learning. Extensive experiments
have demonstrated the effectiveness of our model.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 15:45:36 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhao",
"Wayne Xin",
""
],
[
"Zhou",
"Kun",
""
],
[
"Zhang",
"Beichen",
""
],
[
"Gong",
"Zheng",
""
],
[
"Chen",
"Zhipeng",
""
],
[
"Zhou",
"Yuanhang",
""
],
[
"Wen",
"Ji-Rong",
""
],
[
"Sha",
"Jing",
""
],
[
"Wang",
"Shijin",
""
],
[
"Liu",
"Cong",
""
],
[
"Hu",
"Guoping",
""
]
] |
new_dataset
| 0.998456 |
2306.11148
|
Lenore Mullin
|
Lenore M. R. Mullin
|
From array algebra to energy efficiency on GPUs: Data and hardware
shapes with dimension-lifting to optimize memory-processor layouts
|
9 pages, 12 figures
| null | null | null |
cs.DC cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new formulation for parallel matrix multiplication (MM) to
out-perform the standard row-column code design. This algorithm is formulated
in the MoA formalism (A Mathematics of Arrays) and combines an array view of
hardware (dimension-lifting) to extend indexing to physical memory/processing
units, with a contiguous data layout derived from static transformations. This
view of a hardware-software model is thus a bridging model in the sense of
Valiant's BSP. OpenACC code was derived from the MoA expression's normal form,
producing optimal block sizes using the static information of types and shapes.
Experiments were run on Nvidia V100 GPUs and reveal energy consumption which is
quadratic in N, i.e. linear in the size of the matrix. More generally, this approach
may be an ideal way of formulating, optimizing, and mapping array algorithms to
embedded hardware. This work builds upon recently published results of NREL
scientists.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 20:10:23 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Mullin",
"Lenore M. R.",
""
]
] |
new_dataset
| 0.998425 |
2306.11164
|
Paula Romero Jure
|
Paula V. Romero Jure and Juan Bautista Cabral and Sergio Masuelli
|
ETL for the integration of remote sensing data
|
8 pages, 3 figures. Submitted to SAIV 2023 - Simposio Argentino de
Im\'agenes y Visi\'on
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Modern in-orbit satellites and other available remote sensing tools have
generated a huge amount of public data waiting to be exploited, in
different formats and hosted on different servers. In this context, the ETL formalism
becomes relevant for the integration and analysis of the combined information
from all these sources. Throughout this work, we present the theoretical and
practical foundations to build a modular analysis infrastructure that allows
the creation of ETLs to download, transform and integrate data coming from
different instruments in different formats. Part of this work is already
implemented in a Python library which is intended to be integrated into already
available workflow management tools based on directed acyclic graphs, which also
have different adapters to load the combined data into different warehouses.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 21:10:38 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Jure",
"Paula V. Romero",
""
],
[
"Cabral",
"Juan Bautista",
""
],
[
"Masuelli",
"Sergio",
""
]
] |
new_dataset
| 0.992065 |
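The ETL entry above describes a modular pipeline for downloading, transforming, and integrating remote sensing products. The sketch below is a generic extract-transform-load skeleton with placeholder URLs and values; the actual library wraps comparable steps as nodes of DAG-based workflow managers.

```python
import datetime as dt
from dataclasses import dataclass

@dataclass
class Granule:
    source: str
    acquired: dt.datetime
    values: list

def extract(url: str) -> Granule:
    # Placeholder: a real step would download and open a satellite product.
    return Granule(source=url, acquired=dt.datetime(2023, 6, 19),
                   values=[0.1, 0.4, 0.35, 0.9])

def transform(g: Granule, scale: float = 100.0) -> Granule:
    # Example transformation: rescale measurements to a common unit.
    return Granule(g.source, g.acquired, [v * scale for v in g.values])

def load(g: Granule, warehouse: dict) -> None:
    # Placeholder for writing into a warehouse or database table.
    warehouse.setdefault(g.source, []).append(g)

warehouse: dict = {}
for url in ["https://example.org/product/a", "https://example.org/product/b"]:
    load(transform(extract(url)), warehouse)
print({k: len(v) for k, v in warehouse.items()})
```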
2306.11203
|
Elysia Smyers
|
Elysia Q. Smyers, Sydney M. Katz, Anthony L. Corso and Mykel J.
Kochenderfer
|
AVOIDDS: Aircraft Vision-based Intruder Detection Dataset and Simulator
|
Submitted to the NeurIPS 2023 Datasets and Benchmarks Track
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing robust machine learning systems remains an open problem, and there
is a need for benchmark problems that cover both environmental changes and
evaluation on a downstream task. In this work, we introduce AVOIDDS, a
realistic object detection benchmark for the vision-based aircraft
detect-and-avoid problem. We provide a labeled dataset consisting of 72,000
photorealistic images of intruder aircraft with various lighting conditions,
weather conditions, relative geometries, and geographic locations. We also
provide an interface that evaluates trained models on slices of this dataset to
identify changes in performance with respect to changing environmental
conditions. Finally, we implement a fully-integrated, closed-loop simulator of
the vision-based detect-and-avoid problem to evaluate trained models with
respect to the downstream collision avoidance task. This benchmark will enable
further research in the design of robust machine learning systems for use in
safety-critical applications. The AVOIDDS dataset and code are publicly
available at
$\href{https://purl.stanford.edu/hj293cv5980}{purl.stanford.edu/hj293cv5980}$
and
$\href{https://github.com/sisl/VisionBasedAircraftDAA}{github.com/sisl/VisionBasedAircraftDAA}$,
respectively.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 23:58:07 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Smyers",
"Elysia Q.",
""
],
[
"Katz",
"Sydney M.",
""
],
[
"Corso",
"Anthony L.",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
new_dataset
| 0.999388 |
2306.11247
|
Alicia Parrish
|
Lora Aroyo, Alex S. Taylor, Mark Diaz, Christopher M. Homan, Alicia
Parrish, Greg Serapio-Garcia, Vinodkumar Prabhakaran, Ding Wang
|
DICES Dataset: Diversity in Conversational AI Evaluation for Safety
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning approaches often require training and evaluation datasets
with a clear separation between positive and negative examples. This risks
simplifying and even obscuring the inherent subjectivity present in many tasks.
Preserving such variance in content and diversity in datasets is often
expensive and laborious. This is especially troubling when building safety
datasets for conversational AI systems, as safety is both socially and
culturally situated. To demonstrate this crucial aspect of conversational AI
safety, and to facilitate in-depth model performance analyses, we introduce the
DICES (Diversity In Conversational AI Evaluation for Safety) dataset that
contains fine-grained demographic information about raters, high replication of
ratings per item to ensure statistical power for analyses, and encodes rater
votes as distributions across different demographics to allow for in-depth
explorations of different aggregation strategies. In short, the DICES dataset
enables the observation and measurement of variance, ambiguity, and diversity
in the context of conversational AI safety. We also illustrate how the dataset
offers a basis for establishing metrics to show how raters' ratings can
intersect with demographic categories such as racial/ethnic groups, age
groups, and genders. The goal of DICES is to be used as a shared resource and
benchmark that respects diverse perspectives during safety evaluation of
conversational AI systems.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 03:00:12 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Aroyo",
"Lora",
""
],
[
"Taylor",
"Alex S.",
""
],
[
"Diaz",
"Mark",
""
],
[
"Homan",
"Christopher M.",
""
],
[
"Parrish",
"Alicia",
""
],
[
"Serapio-Garcia",
"Greg",
""
],
[
"Prabhakaran",
"Vinodkumar",
""
],
[
"Wang",
"Ding",
""
]
] |
new_dataset
| 0.999893 |
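DICES, per the entry above, encodes rater votes as distributions across demographic groups. The sketch below shows that aggregation on a toy rater table with pandas; the column names and vote values are illustrative and do not reflect the dataset's actual schema.

```python
import pandas as pd

# Toy rater table: each row is one rater's safety vote on one conversation,
# with rater demographics attached (illustrative values only).
ratings = pd.DataFrame({
    "item_id":   [1, 1, 1, 1, 2, 2, 2, 2],
    "age_group": ["18-24", "25-34", "18-24", "35-44",
                  "25-34", "25-34", "35-44", "18-24"],
    "vote":      ["safe", "unsafe", "unsafe", "safe",
                  "safe", "safe", "unsafe", "safe"],
})

# Encode votes as per-demographic distributions for each item.
dist = (ratings
        .groupby(["item_id", "age_group"])["vote"]
        .value_counts(normalize=True)
        .unstack(fill_value=0.0))
print(dist)
```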
2306.11249
|
Cheng Tan
|
Cheng Tan, Siyuan Li, Zhangyang Gao, Wenfei Guan, Zedong Wang, Zicheng
Liu, Lirong Wu, Stan Z. Li
|
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning
|
33 pages, 17 figures, 19 tables. Under review. For more details,
please refer to https://github.com/chengtan9907/OpenSTL
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatio-temporal predictive learning is a learning paradigm that enables
models to learn spatial and temporal patterns by predicting future frames from
given past frames in an unsupervised manner. Despite remarkable progress in
recent years, a lack of systematic understanding persists due to the diverse
settings, complex implementation, and difficult reproducibility. Without
standardization, comparisons can be unfair and insights inconclusive. To
address this dilemma, we propose OpenSTL, a comprehensive benchmark for
spatio-temporal predictive learning that categorizes prevalent approaches into
recurrent-based and recurrent-free models. OpenSTL provides a modular and
extensible framework implementing various state-of-the-art methods. We conduct
standard evaluations on datasets across various domains, including synthetic
moving object trajectory, human motion, driving scenes, traffic flow and
weather forecasting. Based on our observations, we provide a detailed analysis
of how model architecture and dataset properties affect spatio-temporal
predictive learning performance. Surprisingly, we find that recurrent-free
models achieve a better balance between efficiency and performance than recurrent
models. Thus, we further extend the common MetaFormers to boost recurrent-free
spatial-temporal predictive learning. We open-source the code and models at
https://github.com/chengtan9907/OpenSTL.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 03:02:14 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Tan",
"Cheng",
""
],
[
"Li",
"Siyuan",
""
],
[
"Gao",
"Zhangyang",
""
],
[
"Guan",
"Wenfei",
""
],
[
"Wang",
"Zedong",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Wu",
"Lirong",
""
],
[
"Li",
"Stan Z.",
""
]
] |
new_dataset
| 0.996097 |
2306.11256
|
Yang Janet Liu
|
Yang Janet Liu and Amir Zeldes
|
GUMSum: Multi-Genre Data and Evaluation for English Abstractive
Summarization
|
Accepted to the Findings of ACL 2023; camera-ready version
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic summarization with pre-trained language models has led to
impressively fluent results, but is prone to 'hallucinations', low performance
on non-news genres, and outputs which are not exactly summaries. Targeting ACL
2023's 'Reality Check' theme, we present GUMSum, a small but carefully crafted
dataset of English summaries in 12 written and spoken genres for evaluation of
abstractive summarization. Summaries are highly constrained, focusing on
substitutive potential, factuality, and faithfulness. We present guidelines and
evaluate human agreement as well as subjective judgments on recent system
outputs, comparing general-domain untuned approaches, a fine-tuned one, and a
prompt-based approach, to human performance. Results show that while GPT3
achieves impressive scores, it still underperforms humans, with varying quality
across genres. Human judgments reveal different types of errors in supervised,
prompted, and human-generated summaries, shedding light on the challenges of
producing a good summary.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 03:21:10 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Liu",
"Yang Janet",
""
],
[
"Zeldes",
"Amir",
""
]
] |
new_dataset
| 0.999326 |
2306.11301
|
Zixuan Wu
|
Zixuan Wu, Sean Ye, Manisha Natarajan, Letian Chen, Rohan Paleja,
Matthew C. Gombolay
|
Adversarial Search and Track with Multiagent Reinforcement Learning in
Sparsely Observable Environment
|
Submitted to IEEE/RSJ International Conference on Intelligent Robots
(IROS) 2023
| null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a search and tracking (S&T) problem for a team of dynamic search
agents to capture an adversarial evasive agent with only sparse temporal and
spatial knowledge of its location in this paper. The domain is challenging for
traditional Reinforcement Learning (RL) approaches as the large space leads to
sparse observations of the adversary and in turn sparse rewards for the search
agents. Additionally, the opponent's behavior is reactionary to the search
agents, which causes a data distribution shift for RL during training as search
agents improve their policies. We propose a differentiable Multi-Agent RL
(MARL) architecture that utilizes a novel filtering module to supplement
estimated adversary location information and enables the effective learning of
a team policy. Our algorithm learns how to balance information from prior
knowledge and a motion model to remain resilient to the data distribution shift
and outperforms all baseline methods with a 46% increase in detection rate.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 05:31:13 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wu",
"Zixuan",
""
],
[
"Ye",
"Sean",
""
],
[
"Natarajan",
"Manisha",
""
],
[
"Chen",
"Letian",
""
],
[
"Paleja",
"Rohan",
""
],
[
"Gombolay",
"Matthew C.",
""
]
] |
new_dataset
| 0.996412 |
2306.11326
|
Mitchell Rogers
|
Mitchell Rogers, Ga\"el Gendron, David Arturo Soriano Valdez, Mihailo
Azhar, Yang Chen, Shahrokh Heidari, Caleb Perelini, Padriac O'Leary, Kobe
Knowles, Izak Tait, Simon Eyre, Michael Witbrock, and Patrice Delmas
|
Meerkat Behaviour Recognition Dataset
|
Presented as a poster for the CV4Animals Workshop, CVPR 2023. For
associated dataset see: https://meerkat-dataset.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recording animal behaviour is an important step in evaluating the well-being
of animals and further understanding the natural world. Current methods for
documenting animal behaviour within a zoo setting, such as scan sampling,
require excessive human effort, are unfit for around-the-clock monitoring, and
may produce human-biased results. Several animal datasets already exist that
focus predominantly on wildlife interactions, with some extending to action or
behaviour recognition. However, there is limited data in a zoo setting or data
focusing on the group behaviours of social animals. We introduce a large
meerkat (Suricata suricatta) behaviour recognition video dataset with diverse
annotated behaviours, including group social interactions, along with tracking
of individuals within the camera view, a skewed class distribution, and varying
illumination conditions. This dataset includes videos from two positions within
the meerkat enclosure at the Wellington Zoo (Wellington, New Zealand), with
848,400 annotated frames across 20 videos and 15 unannotated videos.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 06:50:50 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Rogers",
"Mitchell",
""
],
[
"Gendron",
"Gaël",
""
],
[
"Valdez",
"David Arturo Soriano",
""
],
[
"Azhar",
"Mihailo",
""
],
[
"Chen",
"Yang",
""
],
[
"Heidari",
"Shahrokh",
""
],
[
"Perelini",
"Caleb",
""
],
[
"O'Leary",
"Padriac",
""
],
[
"Knowles",
"Kobe",
""
],
[
"Tait",
"Izak",
""
],
[
"Eyre",
"Simon",
""
],
[
"Witbrock",
"Michael",
""
],
[
"Delmas",
"Patrice",
""
]
] |
new_dataset
| 0.999741 |
2306.11341
|
Willy Fitra Hendria
|
Willy Fitra Hendria
|
MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in
Indonesian
|
13 pages, 5 figures, 5 tables
| null | null | null |
cs.MM cs.CL cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal learning on video and text data has been receiving growing
attention from many researchers in various research tasks, including
text-to-video retrieval, video-to-text retrieval, and video captioning.
Although many algorithms have been proposed for those challenging tasks, most
of them are developed on English language datasets. Despite Indonesian being
one of the most spoken languages in the world, the research progress on the
multimodal video-text with Indonesian sentences is still under-explored, likely
due to the absence of the public benchmark dataset. To address this issue, we
construct the first public Indonesian video-text dataset by translating English
sentences from the MSVD dataset to Indonesian sentences. Using our dataset, we
then train neural network models which were developed for the English
video-text dataset on three tasks, i.e., text-to-video retrieval, video-to-text
retrieval, and video captioning. The recent neural network-based approaches to
video-text tasks often utilize a feature extractor that is primarily
pretrained on an English vision-language dataset. Since the availability of the
pretraining resources with Indonesian sentences is relatively limited, the
applicability of those approaches to our dataset is still questionable. To
overcome the lack of pretraining resources, we apply cross-lingual transfer
learning by utilizing the feature extractors pretrained on the English dataset,
and we then fine-tune the models on our Indonesian dataset. Our experimental
results show that this approach can help to improve the performance for the
three tasks on all metrics. Finally, we discuss potential future works using
our dataset, inspiring further research in the Indonesian multimodal video-text
tasks. We believe that our dataset and our experimental results could provide
valuable contributions to the community. Our dataset is available on GitHub.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 07:19:36 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Hendria",
"Willy Fitra",
""
]
] |
new_dataset
| 0.999778 |
2306.11345
|
Zhongzhen Huang
|
Zhongzhen Huang, Xiaofan Zhang, Shaoting Zhang
|
KiUT: Knowledge-injected U-Transformer for Radiology Report Generation
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radiology report generation aims to automatically generate a clinically
accurate and coherent paragraph from the X-ray image, which could relieve
radiologists from the heavy burden of report writing. Although various image
caption methods have shown remarkable performance in the natural image field,
generating accurate reports for medical images requires knowledge of multiple
modalities, including vision, language, and medical terminology. We propose a
Knowledge-injected U-Transformer (KiUT) to learn multi-level visual
representation and adaptively distill the information with contextual and
clinical knowledge for word prediction. In detail, a U-connection schema
between the encoder and decoder is designed to model interactions between
different modalities, and a symptom graph and an injected knowledge distiller
are developed to assist report generation. Experimentally, we outperform
state-of-the-art methods on two widely used benchmark datasets: IU-Xray and
MIMIC-CXR. Further experimental results prove the advantages of our
architecture and the complementary benefits of the injected knowledge.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 07:27:28 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Huang",
"Zhongzhen",
""
],
[
"Zhang",
"Xiaofan",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
new_dataset
| 0.990651 |
2306.11346
|
Yu Zheng
|
Guangming Wang, Yu Zheng, Yanfeng Guo, Zhe Liu, Yixiang Zhu, Wolfram
Burgard, and Hesheng Wang
|
End-to-end 2D-3D Registration between Image and LiDAR Point Cloud for
Vehicle Localization
|
18 pages, 14 figures, under review
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robot localization using a previously built map is essential for a variety of
tasks including highly accurate navigation and mobile manipulation. A popular
approach to robot localization is based on image-to-point cloud registration,
which combines illumination-invariant LiDAR-based mapping with economical
image-based localization. However, the recent works for image-to-point cloud
registration either divide the registration into separate modules or project
the point cloud to the depth image to register the RGB and depth images. In
this paper, we present I2PNet, a novel end-to-end 2D-3D registration network.
I2PNet directly registers the raw 3D point cloud with the 2D RGB image using
differential modules with a unique target. The 2D-3D cost volume module for
differential 2D-3D association is proposed to bridge feature extraction and
pose regression. The 2D-3D cost volume module implicitly constructs the soft
point-to-pixel correspondence on the intrinsic-independent normalized plane of
the pinhole camera model. Moreover, we introduce an outlier mask prediction
module to filter the outliers in the 2D-3D association before pose regression.
Furthermore, we propose the coarse-to-fine 2D-3D registration architecture to
increase localization accuracy. We conduct extensive localization experiments
on the KITTI Odometry and nuScenes datasets. The results demonstrate that
I2PNet outperforms the state-of-the-art by a large margin. In addition, I2PNet
has a higher efficiency than the previous works and can perform the
localization in real-time. Moreover, we extend the application of I2PNet to the
camera-LiDAR online calibration and demonstrate that I2PNet outperforms recent
approaches on the online calibration task.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 07:28:40 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wang",
"Guangming",
""
],
[
"Zheng",
"Yu",
""
],
[
"Guo",
"Yanfeng",
""
],
[
"Liu",
"Zhe",
""
],
[
"Zhu",
"Yixiang",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Wang",
"Hesheng",
""
]
] |
new_dataset
| 0.984921 |
2306.11390
|
Haris Bin Zia
|
Haris Bin Zia, Ehsan Ul Haq, Ignacio Castro, Pan Hui, Gareth Tyson
|
An Analysis of Twitter Discourse on the War Between Russia and Ukraine
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
On the 21st of February 2022, Russia recognised the Donetsk People's Republic
and the Luhansk People's Republic, three days before launching an invasion of
Ukraine. Since then, an active debate has taken place on social media, mixing
organic discussions with coordinated information campaigns. The scale of this
discourse, alongside the role that information warfare has played in the
invasion, make it vital to better understand this ecosystem. We therefore
present a study of pro-Ukrainian vs. pro-Russian discourse through the lens of
Twitter. We do so from two perspectives: (i) the content that is shared; and
(ii) the users who participate in the sharing. We first explore the scale and
nature of conversations, including analysis of hashtags, toxicity and media
sharing. We then study the users who drive this discourse, highlighting a significant
presence of new users and bots.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 08:57:17 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zia",
"Haris Bin",
""
],
[
"Haq",
"Ehsan Ul",
""
],
[
"Castro",
"Ignacio",
""
],
[
"Hui",
"Pan",
""
],
[
"Tyson",
"Gareth",
""
]
] |
new_dataset
| 0.976105 |
2306.11400
|
Yongzhu Miao
|
Yongzhu Miao, Shasha Li, Jintao Tang and Ting Wang
|
MuDPT: Multi-modal Deep-symphysis Prompt Tuning for Large Pre-trained
Vision-Language Models
|
The paper has been accepted by ICME 2023
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prompt tuning, like CoOp, has recently shown promising vision recognition and
transfer learning ability on various downstream tasks with the emergence of
large pre-trained vision-language models like CLIP. However, we identify that
existing uni-modal prompt tuning approaches may result in sub-optimal
performance since this uni-modal design breaks the original alignment of
textual and visual representations in the pre-trained model. Inspired by the
nature of pre-trained vision-language models, we aim to achieve completeness in
prompt tuning and propose a novel approach called Multi-modal Deep-symphysis
Prompt Tuning, dubbed as MuDPT, which extends independent multi-modal prompt
tuning by additionally learning a model-agnostic transformative network to
allow deep hierarchical bi-directional prompt fusion. We evaluate the
effectiveness of MuDPT on few-shot vision recognition and out-of-domain
generalization tasks. Compared with the state-of-the-art methods, MuDPT
achieves better recognition and generalization ability with an apparent margin
thanks to synergistic alignment of textual and visual representations. Our code
is available at: https://github.com/Mechrev0/MuDPT.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 09:15:52 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Miao",
"Yongzhu",
""
],
[
"Li",
"Shasha",
""
],
[
"Tang",
"Jintao",
""
],
[
"Wang",
"Ting",
""
]
] |
new_dataset
| 0.99743 |
2306.11417
|
Chenghao Liu
|
Chenghao Liu, Wenzhuo Yang, Himanshu Mittal, Manpreet Singh, Doyen
Sahoo, Steven C. H. Hoi
|
PyRCA: A Library for Metric-based Root Cause Analysis
|
Github repo: https://github.com/salesforce/PyRCA
| null | null | null |
cs.AI cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PyRCA, an open-source Python machine learning library of Root
Cause Analysis (RCA) for Artificial Intelligence for IT Operations (AIOps). It
provides a holistic framework to uncover the complicated metric causal
dependencies and automatically locate root causes of incidents. It offers a
unified interface for multiple commonly used RCA models, encompassing both
graph construction and scoring tasks. This library aims to provide IT
operations staff, data scientists, and researchers with a one-step solution for
rapid model development, model evaluation, and deployment to online applications. In
particular, our library includes various causal discovery methods to support
causal graph construction, and multiple types of root cause scoring methods
inspired by Bayesian analysis, graph analysis and causal analysis, etc. Our GUI
dashboard offers practitioners an intuitive point-and-click interface,
empowering them to easily inject expert knowledge through human interaction.
With the ability to visualize causal graphs and the root cause of incidents,
practitioners can quickly gain insights and improve their workflow efficiency.
This technical report introduces PyRCA's architecture and major
functionalities, while also presenting benchmark performance numbers in
comparison to various baseline models. Additionally, we demonstrate PyRCA's
capabilities through several example use cases.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 09:55:10 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Liu",
"Chenghao",
""
],
[
"Yang",
"Wenzhuo",
""
],
[
"Mittal",
"Himanshu",
""
],
[
"Singh",
"Manpreet",
""
],
[
"Sahoo",
"Doyen",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.999474 |
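As a hedged aside on the PyRCA entry above: one common graph-analysis flavour of root cause scoring runs a personalized random walk over the metric dependency graph so that blame flows from anomalous symptoms back to likely causes. The sketch below illustrates that generic idea only; it does not use PyRCA's actual API, and the service names, edges, and anomaly scores are invented.

```python
# Generic sketch of graph-based root cause scoring (NOT PyRCA's API).
# Personalized PageRank on the reversed dependency graph propagates blame
# from anomalous symptoms toward upstream candidate causes.
import networkx as nx

G = nx.DiGraph()  # edge u -> v means "u can affect v"
G.add_edges_from([("db", "api"), ("cache", "api"), ("api", "web"), ("db", "batch_job")])

# Hypothetical anomaly scores observed on each service's metrics.
anomaly = {"web": 0.9, "api": 0.7, "db": 0.2, "cache": 0.1, "batch_job": 0.0}

reversed_graph = G.reverse(copy=True)  # walk against the causal direction
personalization = {n: anomaly.get(n, 0.0) + 1e-6 for n in reversed_graph.nodes}

scores = nx.pagerank(reversed_graph, alpha=0.85, personalization=personalization)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:>9s}  root-cause score = {score:.3f}")
```

The ranking is only as good as the graph and anomaly scores fed in; PyRCA itself, per the abstract, pairs such graph-based scoring with Bayesian and causal analyses.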
2306.11423
|
Hao Chen
|
Hao Chen
|
New Binary Self-Dual Cyclic Codes with Square-Root-Like Minimum
Distances
|
12 pages
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The construction of self-dual codes over small fields such that their minimum
distances are as large as possible is a long-standing challenging problem in
coding theory. In 2009, a family of binary self-dual cyclic codes with lengths
$n_i$ and minimum distances $d_i \geq \frac{1}{2} \sqrt{n_i}$, $i=1,2,\ldots$,
where $n_i$ tends to infinity, was constructed. In this paper, we construct a
family of (repeated-root) binary self-dual cyclic codes with lengths $n$ and
minimum distances at least $\sqrt{n}-2$. New families of self-dual codes over
${\bf F}_q$, $q \equiv 1 \pmod 4$, with lengths $n=q^m-1$, $m=3,5,\ldots$, and
minimum distances at least $\sqrt{\frac{q}{2}}\sqrt{n}-q$ are also constructed.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 10:12:38 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Chen",
"Hao",
""
]
] |
new_dataset
| 0.999072 |
2306.11443
|
Yansong Ning
|
Yansong Ning, Hao Liu, Hao Wang, Zhenyu Zeng and Hui Xiong
|
UUKG: Unified Urban Knowledge Graph Dataset for Urban Spatiotemporal
Prediction
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate Urban SpatioTemporal Prediction (USTP) is of great importance to the
development and operation of the smart city. As an emerging building block,
multi-sourced urban data are usually integrated as urban knowledge graphs
(UrbanKGs) to provide critical knowledge for urban spatiotemporal prediction
models. However, existing UrbanKGs are often tailored for specific downstream
prediction tasks and are not publicly available, which limits the potential
advancement. This paper presents UUKG, the unified urban knowledge graph
dataset for knowledge-enhanced urban spatiotemporal predictions. Specifically,
we first construct UrbanKGs consisting of millions of triplets for two
metropolises by connecting heterogeneous urban entities such as administrative
boroughs, POIs, and road segments. Moreover, we conduct qualitative and
quantitative analysis on constructed UrbanKGs and uncover diverse high-order
structural patterns, such as hierarchies and cycles, that can be leveraged to
benefit downstream USTP tasks. To validate and facilitate the use of UrbanKGs,
we implement and evaluate 15 KG embedding methods on the KG completion task and
integrate the learned KG embeddings into 9 spatiotemporal models for five
different USTP tasks. The extensive experimental results not only provide
benchmarks of knowledge-enhanced USTP models under different task settings but
also highlight the potential of state-of-the-art high-order structure-aware
UrbanKG embedding methods. We hope the proposed UUKG fosters research on urban
knowledge graphs and broad smart city applications. The dataset and source code
are available at https://github.com/usail-hkust/UUKG/.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 10:40:53 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Ning",
"Yansong",
""
],
[
"Liu",
"Hao",
""
],
[
"Wang",
"Hao",
""
],
[
"Zeng",
"Zhenyu",
""
],
[
"Xiong",
"Hui",
""
]
] |
new_dataset
| 0.99968 |
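A brief, hedged aside on the UUKG entry above: the KG embedding methods it benchmarks are all variations on scoring (head, relation, tail) triplets in a vector space. The snippet below is a minimal TransE-style scoring sketch for intuition only; it is not UUKG's code, and the entities, relation, and dimensionality are invented.

```python
# Minimal TransE-style triplet scoring (generic illustration, not UUKG's code).
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # embedding dimension, chosen arbitrarily for this sketch

# Hypothetical urban entities and a relation, e.g. a borough "contains" a POI.
entities = {"borough_1": rng.normal(size=dim), "poi_42": rng.normal(size=dim)}
relations = {"contains": rng.normal(size=dim)}

def transe_distance(head, relation, tail):
    """TransE assumes head + relation is close to tail for true triplets, so lower is better."""
    return float(np.linalg.norm(entities[head] + relations[relation] - entities[tail]))

print(transe_distance("borough_1", "contains", "poi_42"))
```

Training would push this distance down for observed triplets and up for corrupted ones; the learned vectors are what the abstract then feeds into downstream spatiotemporal models.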
2306.11448
|
Xin Meng
|
Xin Meng, Hongtao Wu, Sipu Ruan, Gregory S. Chirikjian
|
Prepare the Chair for the Bear! Robot Imagination of Sitting Affordance
to Reorient Previously Unseen Chairs
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, a paradigm for the classification and manipulation of
previously unseen objects is established and demonstrated through a real
example of chairs. We present a novel robot manipulation method, guided by the
understanding of object stability, perceptibility, and affordance, which allows
the robot to prepare previously unseen and randomly oriented chairs for a teddy
bear to sit on. Specifically, the robot encounters an unknown object and first
reconstructs a complete 3D model from perceptual data via active and autonomous
manipulation. By inserting this model into a physical simulator (i.e., the
robot's "imagination"), the robot assesses whether the object is a chair and
determines how to reorient it properly to be used, i.e., how to reorient it to
an upright and accessible pose. If the object is classified as a chair, the
robot reorients the object to this pose and seats the teddy bear onto the
chair. The teddy bear is a proxy for an elderly person, hospital patient, or
child. Experiment results show that our method achieves a high success rate on
the real robot task of chair preparation. Also, it outperforms several baseline
methods on the task of upright pose prediction for chairs.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 11:05:32 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Meng",
"Xin",
""
],
[
"Wu",
"Hongtao",
""
],
[
"Ruan",
"Sipu",
""
],
[
"Chirikjian",
"Gregory S.",
""
]
] |
new_dataset
| 0.99729 |
2306.11473
|
Woojay Jeon
|
Woojay Jeon
|
Timestamped Embedding-Matching Acoustic-to-Word CTC ASR
| null | null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we describe a novel method of training an embedding-matching
word-level connectionist temporal classification (CTC) automatic speech
recognizer (ASR) such that it directly produces word start times and durations,
required by many real-world applications, in addition to the transcription. The
word timestamps enable the ASR to output word segmentations and word confusion
networks without relying on a secondary model or forced alignment process when
testing. Our proposed system has similar word segmentation accuracy as a hybrid
DNN-HMM (Deep Neural Network-Hidden Markov Model) system, with less than 3ms
difference in mean absolute error in word start times on TIMIT data. At the
same time, we observed less than 5% relative increase in the word error rate
compared to the non-timestamped system when using the same audio training data
and nearly identical model size. We also contribute more rigorous analysis of
multiple-hypothesis embedding-matching ASR in general.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 11:53:43 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Jeon",
"Woojay",
""
]
] |
new_dataset
| 0.997405 |
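As a hedged illustration of the timing metric quoted in the entry above (mean absolute error of word start times), the sketch below computes it for a toy utterance. It assumes the reference and hypothesis words are already matched one-to-one, which sidesteps the alignment step a real evaluation would need; all numbers are invented.

```python
# Toy computation of word start-time mean absolute error (MAE), in milliseconds.
# Assumes reference and hypothesis word lists are already aligned one-to-one.
ref_starts_ms = [120.0, 480.0, 910.0, 1350.0]   # hypothetical reference start times
hyp_starts_ms = [118.0, 485.0, 905.0, 1352.0]   # hypothetical predicted start times

mae_ms = sum(abs(r - h) for r, h in zip(ref_starts_ms, hyp_starts_ms)) / len(ref_starts_ms)
print(f"word start-time MAE: {mae_ms:.1f} ms")  # 3.5 ms for this toy example
```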
2306.11477
|
Liang Li
|
Liang Li, Ruiying Geng, Chengyang Fang, Bing Li, Can Ma, Rongyu Cao,
Binhua Li, Fei Huang, Yongbin Li
|
CATS: A Pragmatic Chinese Answer-to-Sequence Dataset with Large Scale
and High Quality
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
There are three problems with popular data-to-text datasets. First, the
large-scale datasets either contain noise or lack real application scenarios.
Second, the datasets close to real applications are relatively small in size.
Last, current datasets are biased toward the English language, leaving other
languages underexplored. To alleviate these limitations, in this paper,
we present CATS, a pragmatic Chinese answer-to-sequence dataset with large
scale and high quality. The dataset aims to generate textual descriptions for
the answer in the practical TableQA system. Further, to bridge the structural
gap between the input SQL and table and establish better semantic alignments,
we propose a Unified Graph Transformation approach to establish a joint
encoding space for the two hybrid knowledge resources and convert this task to
a graph-to-text problem. The experiment results demonstrate the effectiveness
of our proposed method. Further analysis on CATS attests to both the high
quality and challenges of the dataset.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 12:02:26 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Li",
"Liang",
""
],
[
"Geng",
"Ruiying",
""
],
[
"Fang",
"Chengyang",
""
],
[
"Li",
"Bing",
""
],
[
"Ma",
"Can",
""
],
[
"Cao",
"Rongyu",
""
],
[
"Li",
"Binhua",
""
],
[
"Huang",
"Fei",
""
],
[
"Li",
"Yongbin",
""
]
] |
new_dataset
| 0.999854 |
2306.11522
|
Csaba D. Toth
|
Adrian Dumitrescu and Csaba D. T\'oth
|
Observation Routes and External Watchman Routes
|
20 pages, 11 figures. (A 15-page extended abstract of this paper will
appear in the proceedings of WADS 2023.)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the Observation Route Problem ($\textsf{ORP}$) defined as
follows: Given a set of $n$ pairwise disjoint compact regions in the plane,
find a shortest tour (route) such that an observer walking along this tour can
see (observe) some point in each region from some point of the tour. The
observer does \emph{not} need to see the entire boundary of an object. The tour
is \emph{not} allowed to intersect the interior of any region (i.e., the
regions are obstacles and therefore out of bounds). The problem exhibits
similarity to both the Traveling Salesman Problem with Neighborhoods
($\textsf{TSPN}$) and the External Watchman Route Problem ($\textsf{EWRP}$). We
distinguish two variants: the range of visibility is either limited to a
bounding rectangle, or unlimited. We obtain the following results:
(I) Given a family of $n$ disjoint convex bodies in the plane, computing a
shortest observation route does not admit a $(c\log n)$-approximation unless
$\textsf{P} = \textsf{NP}$ for an absolute constant $c>0$. (This holds for both
limited and unlimited vision.)
(II) Given a family of disjoint convex bodies in the plane, computing a
shortest external watchman route is $\textsf{NP}$-hard. (This holds for both
limited and unlimited vision; and even for families of axis-aligned squares.)
(III) Given a family of $n$ disjoint fat convex polygons, an observation tour
whose length is at most $O(\log{n})$ times the optimal can be computed in
polynomial time. (This holds for limited vision.)
(IV) For every $n \geq 5$, there exists a convex polygon with $n$ sides and
all angles obtuse such that its perimeter is \emph{not} a shortest external
watchman route. This refutes a conjecture by Absar and Whitesides (2006).
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 13:17:04 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Dumitrescu",
"Adrian",
""
],
[
"Tóth",
"Csaba D.",
""
]
] |
new_dataset
| 0.981164 |
2306.11534
|
Mat\'u\v{s} Sul\'ir
|
Mat\'u\v{s} Sul\'ir, Marcel Regeci
|
Software Engineers' Questions and Answers on Stack Exchange
| null |
2022 IEEE 16th International Scientific Conference on Informatics,
IEEE, 2022, pp. 304-310
|
10.1109/Informatics57926.2022.10083403
| null |
cs.SE cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There exists a large number of research works analyzing questions and answers
on the popular Stack Overflow website. However, other sub-sites of the Stack
Exchange platform are studied rarely. In this paper, we analyze the questions
and answers on the Software Engineering Stack Exchange site that encompasses a
broader set of areas, such as testing or software processes. Topics and
quantities of the questions, historical trends, and the authors' sentiment were
analyzed using downloaded datasets. We found that the questions asked are most
frequently related to database systems, quality assurance, and agile software
development. The most attractive topics were career and teamwork problems, and
the least attractive ones were network programming and software modeling.
Historically, the topic of domain-driven design recorded the highest rise, and
jobs and career the most significant fall. The number of new questions dropped,
while the portion of unanswered ones increased.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 13:39:49 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Sulír",
"Matúš",
""
],
[
"Regeci",
"Marcel",
""
]
] |
new_dataset
| 0.985171 |
2306.11541
|
Liying Lu
|
Liying Lu, Tianke Zhang, Yunfei Liu, Xuangeng Chu, Yu Li
|
Audio-Driven 3D Facial Animation from In-the-Wild Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an arbitrary audio clip, audio-driven 3D facial animation aims to
generate lifelike lip motions and facial expressions for a 3D head. Existing
methods typically rely on training their models using limited public 3D
datasets that contain a restricted number of audio-3D scan pairs. Consequently,
their generalization capability remains limited. In this paper, we propose a
novel method that leverages in-the-wild 2D talking-head videos to train our 3D
facial animation model. The abundance of easily accessible 2D talking-head
videos equips our model with a robust generalization capability. By combining
these videos with existing 3D face reconstruction methods, our model excels in
generating consistent and high-fidelity lip synchronization. Additionally, our
model proficiently captures the speaking styles of different individuals,
allowing it to generate 3D talking-heads with distinct personal styles.
Extensive qualitative and quantitative experimental results demonstrate the
superiority of our method.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 13:53:05 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Lu",
"Liying",
""
],
[
"Zhang",
"Tianke",
""
],
[
"Liu",
"Yunfei",
""
],
[
"Chu",
"Xuangeng",
""
],
[
"Li",
"Yu",
""
]
] |
new_dataset
| 0.997182 |
2306.11546
|
Yiting Dong
|
Yiting Dong, Yang Li, Dongcheng Zhao, Guobin Shen, Yi Zeng
|
Bullying10K: A Neuromorphic Dataset towards Privacy-Preserving Bullying
Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The prevalence of violence in daily life poses significant threats to
individuals' physical and mental well-being. Using surveillance cameras in
public spaces has proven effective in proactively deterring and preventing such
incidents. However, concerns regarding privacy invasion have emerged due to
their widespread deployment. To address the problem, we leverage Dynamic Vision
Sensors (DVS) cameras to detect violent incidents and preserve privacy since it
captures pixel brightness variations instead of static imagery. We introduce
the Bullying10K dataset, encompassing various actions, complex movements, and
occlusions from real-life scenarios. It provides three benchmarks for
evaluating different tasks: action recognition, temporal action localization,
and pose estimation. With 10,000 event segments, totaling 12 billion events and
255 GB of data, Bullying10K contributes significantly by balancing violence
detection and personal privacy preservation. It also poses a new challenge for
neuromorphic datasets. It will serve as a valuable resource for training and
developing privacy-protecting video systems. The Bullying10K dataset opens new
possibilities for innovative approaches in these domains.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 13:59:20 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Dong",
"Yiting",
""
],
[
"Li",
"Yang",
""
],
[
"Zhao",
"Dongcheng",
""
],
[
"Shen",
"Guobin",
""
],
[
"Zeng",
"Yi",
""
]
] |
new_dataset
| 0.999837 |
2306.11551
|
Pascal Leroy
|
Pascal Leroy, Pablo G. Morato, Jonathan Pisane, Athanasios Kolios,
Damien Ernst
|
IMP-MARL: a Suite of Environments for Large-scale Infrastructure
Management Planning via MARL
| null | null | null | null |
cs.LG cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce IMP-MARL, an open-source suite of multi-agent reinforcement
learning (MARL) environments for large-scale Infrastructure Management Planning
(IMP), offering a platform for benchmarking the scalability of cooperative MARL
methods in real-world engineering applications. In IMP, a multi-component
engineering system is subject to a risk of failure due to its components'
damage condition. Specifically, each agent plans inspections and repairs for a
specific system component, aiming to minimise maintenance costs while
cooperating to minimise system failure risk. With IMP-MARL, we release several
environments including one related to offshore wind structural systems, in an
effort to meet today's needs to improve management strategies to support
sustainable and reliable energy systems. Supported by IMP practical engineering
environments featuring up to 100 agents, we conduct a benchmark campaign, where
the scalability and performance of state-of-the-art cooperative MARL methods
are compared against expert-based heuristic policies. The results reveal that
centralised training with decentralised execution methods scale better with the
number of agents than fully centralised or decentralised RL approaches, while
also outperforming expert-based heuristic policies in most IMP environments.
Based on our findings, we additionally outline remaining cooperation and
scalability challenges that future MARL methods should still address. Through
IMP-MARL, we encourage the implementation of new environments and the further
development of MARL methods.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 14:12:29 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Leroy",
"Pascal",
""
],
[
"Morato",
"Pablo G.",
""
],
[
"Pisane",
"Jonathan",
""
],
[
"Kolios",
"Athanasios",
""
],
[
"Ernst",
"Damien",
""
]
] |
new_dataset
| 0.99933 |
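As a hedged aside on the IMP-MARL entry above: a multi-agent environment suite of this kind typically exposes a reset/step interface in which each agent picks an inspect/repair/do-nothing action for its component and all agents share one cooperative cost signal. The skeleton below shows only that generic shape; it is not IMP-MARL's real API, and the deterioration model, costs, and failure logic are invented.

```python
# Generic multi-agent "inspect/repair" environment skeleton (NOT IMP-MARL's API).
import random

class ToyMaintenanceEnv:
    def __init__(self, n_agents=3, seed=0):
        self.n_agents = n_agents
        self.rng = random.Random(seed)

    def reset(self):
        # One damage level per component, one component per agent.
        self.damage = [0.0] * self.n_agents
        return list(self.damage)

    def step(self, actions):
        cost = 0.0
        for i, action in enumerate(actions):
            self.damage[i] = min(1.0, self.damage[i] + self.rng.uniform(0.0, 0.1))
            if action == "repair":
                self.damage[i] = 0.0
                cost += 1.0
            elif action == "inspect":
                cost += 0.1
        system_failed = all(d > 0.8 for d in self.damage)
        reward = -cost - (100.0 if system_failed else 0.0)  # shared cooperative reward
        return list(self.damage), reward, system_failed

env = ToyMaintenanceEnv()
obs = env.reset()
obs, reward, done = env.step(["inspect", "do_nothing", "repair"])
print(obs, reward, done)
```

Real IMP environments, per the abstract, model deterioration and failure risk far more carefully and scale to up to 100 agents.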
2306.11556
|
Chenbin Li
|
Chenbin Li, Yu Xin, Gaoyi Liu, Xiang Zeng, Ligang Liu
|
NeRF synthesis with shading guidance
|
16 pages, 16 figures, accepted by CAD/Graphics 2023(poster)
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging Neural Radiance Field (NeRF) shows great potential in
representing 3D scenes, which can render photo-realistic images from novel view
with only sparse views given. However, utilizing NeRF to reconstruct real-world
scenes requires images from different viewpoints, which limits its practical
application. This problem can be even more pronounced for large scenes. In this
paper, we introduce a new task called NeRF synthesis that utilizes the
structural content of a NeRF patch exemplar to construct a new radiance field
of large size. We propose a two-phase method for synthesizing new scenes that
are continuous in geometry and appearance. We also propose a boundary
constraint method to synthesize scenes of arbitrary size without artifacts.
Specifically, we control the lighting effects of synthesized scenes using
shading guidance instead of decoupling the scene. We have demonstrated that our
method can generate high-quality results with consistent geometry and
appearance, even for scenes with complex lighting. We can also synthesize new
scenes on curved surfaces with arbitrary lighting effects, which enhances the
practicality of our proposed NeRF synthesis approach.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 14:18:20 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Li",
"Chenbin",
""
],
[
"Xin",
"Yu",
""
],
[
"Liu",
"Gaoyi",
""
],
[
"Zeng",
"Xiang",
""
],
[
"Liu",
"Ligang",
""
]
] |
new_dataset
| 0.999101 |
2306.11565
|
Karmesh Yadav
|
Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin Wang,
Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William
Clegg, John Turner, Zsolt Kira, Manolis Savva, Angel Chang, Devendra Singh
Chaplot, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton
|
HomeRobot: Open-Vocabulary Mobile Manipulation
|
35 pages, 20 figures, 8 tables
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
HomeRobot (noun): An affordable compliant robot that navigates homes and
manipulates a wide range of objects in order to complete everyday tasks.
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object
in any unseen environment, and placing it in a commanded location. This is a
foundational challenge for robots to be useful assistants in human
environments, because it involves tackling sub-problems from across robotics:
perception, language understanding, navigation, and manipulation are all
essential to OVMM. In addition, integration of the solutions to these
sub-problems poses its own substantial challenges. To drive research in this
area, we introduce the HomeRobot OVMM benchmark, where an agent navigates
household environments to grasp novel objects and place them on target
receptacles. HomeRobot has two components: a simulation component, which uses a
large and diverse curated object set in new, high-quality multi-room home
environments; and a real-world component, providing a software stack for the
low-cost Hello Robot Stretch to encourage replication of real-world experiments
across labs. We implement both reinforcement learning and heuristic
(model-based) baselines and show evidence of sim-to-real transfer. Our
baselines achieve a 20% success rate in the real world; our experiments
identify ways in which future research can improve performance. See videos on our
website: https://ovmm.github.io/.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 14:30:32 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Yenamandra",
"Sriram",
""
],
[
"Ramachandran",
"Arun",
""
],
[
"Yadav",
"Karmesh",
""
],
[
"Wang",
"Austin",
""
],
[
"Khanna",
"Mukul",
""
],
[
"Gervet",
"Theophile",
""
],
[
"Yang",
"Tsung-Yen",
""
],
[
"Jain",
"Vidhi",
""
],
[
"Clegg",
"Alexander William",
""
],
[
"Turner",
"John",
""
],
[
"Kira",
"Zsolt",
""
],
[
"Savva",
"Manolis",
""
],
[
"Chang",
"Angel",
""
],
[
"Chaplot",
"Devendra Singh",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Mottaghi",
"Roozbeh",
""
],
[
"Bisk",
"Yonatan",
""
],
[
"Paxton",
"Chris",
""
]
] |
new_dataset
| 0.999701 |
2201.00879
|
Geethu Joseph
|
Geethu Joseph, M. Cenk Gursoy, Pramod K. Varshney
|
Temporal Detection of Anomalies via Actor-Critic Based Controlled
Sensing
|
6 pages, 1 figure
| null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of monitoring a set of binary stochastic processes and
generating an alert when the number of anomalies among them exceeds a
threshold. For this, the decision-maker selects and probes a subset of the
processes to obtain noisy estimates of their states (normal or anomalous).
Based on the received observations, the decision-maker first determines whether
to declare that the number of anomalies has exceeded the threshold or to
continue taking observations. When the decision is to continue, it then decides
whether to collect observations at the next time instant or defer it to a later
time. If it chooses to collect observations, it further determines the subset
of processes to be probed. To devise this three-step sequential decision-making
process, we use a Bayesian formulation wherein we learn the posterior
probability on the states of the processes. Using the posterior probability, we
construct a Markov decision process and solve it using deep actor-critic
reinforcement learning. Via numerical experiments, we demonstrate the superior
performance of our algorithm compared to the traditional model-based
algorithms.
|
[
{
"version": "v1",
"created": "Mon, 3 Jan 2022 20:59:40 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 11:51:06 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Joseph",
"Geethu",
""
],
[
"Gursoy",
"M. Cenk",
""
],
[
"Varshney",
"Pramod K.",
""
]
] |
new_dataset
| 0.964591 |
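A hedged aside on the entry above: the method keeps a posterior probability that each binary process is anomalous and updates it from noisy probe outcomes. The snippet below shows only the generic Bernoulli/Bayes update that such a scheme builds on, not the paper's actual model; the prior, flip probability, and observation sequence are invented.

```python
# Generic posterior update for one binary process observed through a noisy probe
# (illustration of the Bayesian bookkeeping only, not the authors' formulation).
def update_posterior(p_anomalous, observation, flip_prob=0.2):
    """Return P(anomalous | observation) when the probe flips the true state w.p. flip_prob."""
    like_anomalous = (1 - flip_prob) if observation == 1 else flip_prob
    like_normal = flip_prob if observation == 1 else (1 - flip_prob)
    numerator = like_anomalous * p_anomalous
    return numerator / (numerator + like_normal * (1 - p_anomalous))

p = 0.5  # uninformative prior on the process being anomalous
for obs in [1, 1, 0, 1]:  # hypothetical probe outcomes (1 = "looks anomalous")
    p = update_posterior(p, obs)
    print(f"observation={obs} -> P(anomalous)={p:.3f}")
```

Per the abstract, a vector of such posteriors is what defines the Markov decision process that the deep actor-critic agent then solves when choosing to declare, defer, or probe again.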
2203.11400
|
Kiet Nguyen
|
Kiet Van Nguyen, Son Quoc Tran, Luan Thanh Nguyen, Tin Van Huynh, Son
T. Luu, Ngan Luu-Thuy Nguyen
|
VLSP 2021 - ViMRC Challenge: Vietnamese Machine Reading Comprehension
|
The 8th International Workshop on Vietnamese Language and Speech
Processing (VLSP 2021)
| null |
10.25073/2588-1086/vnucsce.340
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
One of the emerging research trends in natural language understanding is
machine reading comprehension (MRC) which is the task to find answers to human
questions based on textual data. Existing Vietnamese datasets for MRC research
concentrate solely on answerable questions. However, in reality, questions can
be unanswerable for which the correct answer is not stated in the given textual
data. To address the weakness, we provide the research community with a
benchmark dataset named UIT-ViQuAD 2.0 for evaluating the MRC task and question
answering systems for the Vietnamese language. We use UIT-ViQuAD 2.0 as a
benchmark dataset for the challenge on Vietnamese MRC at the Eighth Workshop on
Vietnamese Language and Speech Processing (VLSP 2021). This task attracted 77
participant teams from 34 universities and other organizations. In this
article, we present details of the organization of the challenge, an overview
of the methods employed by shared-task participants, and the results. The
highest performances are 77.24% in F1-score and 67.43% in Exact Match on the
private test set. The Vietnamese MRC systems proposed by the top 3 teams use
XLM-RoBERTa, a powerful pre-trained language model based on the transformer
architecture. The UIT-ViQuAD 2.0 dataset motivates researchers to further
explore the Vietnamese machine reading comprehension task and related tasks
such as question answering, question generation, and natural language
inference.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 00:44:41 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 23:51:41 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2022 11:58:38 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Van Nguyen",
"Kiet",
""
],
[
"Tran",
"Son Quoc",
""
],
[
"Nguyen",
"Luan Thanh",
""
],
[
"Van Huynh",
"Tin",
""
],
[
"Luu",
"Son T.",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
] |
new_dataset
| 0.999435 |
2205.10003
|
Ioannis Sarridis
|
Ioannis Sarridis, Christos Koutlis, Giorgos Kordopatis-Zilos, Ioannis
Kompatsiaris, Symeon Papadopoulos
|
InDistill: Information flow-preserving knowledge distillation for model
compression
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper we introduce InDistill, a model compression approach that
combines knowledge distillation and channel pruning in a unified framework for
the transfer of the critical information flow paths from a heavyweight teacher
to a lightweight student. Such information is typically collapsed in previous
methods due to an encoding stage prior to distillation. By contrast, InDistill
leverages a pruning operation applied to the teacher's intermediate layers
reducing their width to the corresponding student layers' width. In that way,
we force architectural alignment enabling the intermediate layers to be
directly distilled without the need of an encoding stage. Additionally, a
curriculum learning-based training scheme is adopted considering the
distillation difficulty of each layer and the critical learning periods in
which the information flow paths are created. The proposed method surpasses
state-of-the-art performance on three standard benchmarks, i.e. CIFAR-10,
CUB-200, and FashionMNIST by 3.08%, 14.27%, and 1% mAP, respectively, as well
as on more challenging evaluation settings, i.e. ImageNet and CIFAR-100 by
1.97% and 5.65% mAP, respectively.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 07:40:09 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2022 12:46:14 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Jun 2023 14:32:05 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Sarridis",
"Ioannis",
""
],
[
"Koutlis",
"Christos",
""
],
[
"Kordopatis-Zilos",
"Giorgos",
""
],
[
"Kompatsiaris",
"Ioannis",
""
],
[
"Papadopoulos",
"Symeon",
""
]
] |
new_dataset
| 0.99446 |
2208.10629
|
Anh V. Vu
|
Anh V. Vu, Daniel R. Thomas, Ben Collier, Alice Hutchings, Richard
Clayton, Ross Anderson
|
Getting Bored of Cyberwar: Exploring the Role of Civilian Hacktivists in
the Russia-Ukraine Conflict
| null | null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
There has been substantial commentary on the role of cyberattacks and
civilian hacktivists in the Russia-Ukraine conflict. Drawing on a range of data
sources, we argue that the widely-held narrative of a significant cyberwar
fought by committed civilians and volunteer `hacktivists' linked to cybercrime
groups has likely been overhyped. We collected 358k web defacement attacks,
1.7M reflected DDoS attacks, and 441 announcements (with 58k replies) of a
volunteer hacking discussion group for two months before and four months after
the invasion. To enrich our quantitative understanding, we conducted interviews
with individuals who were active in defacing Russian and Ukrainian websites.
Our findings indicate that the conflict briefly but significantly caught the
attention of the low-level cybercrime community, with notable increases in both
defacement and DDoS attacks targeting Russia and Ukraine. However, the role of
these players in the so-called cyberwarfare is minor, and they do not resemble
the `hacktivists' imagined in popular criminological accounts. Initial waves of
interest led to more attackers participating in defacement campaigns, but
rather than targeting critical infrastructure, there were mass attacks against
random websites within `.ru' and `.ua'. We find little evidence of high-profile
actions of the kind hypothesised by the prevalent narrative. The much-vaunted
role of the IT Army of Ukraine co-ordination group is mixed; their promoted
targets were seldom defaced although sometimes subjected to DDoS attacks. Our
main finding is that there was a clear loss of interest in carrying out
defacement and DDoS attacks after just a few weeks. Contrary to the prediction
of some commentators, the involvement of civilian hacktivists from low-level
crime groups in the conflict appears to have been minor, short-lived, and
fleeting.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 22:11:04 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 10:44:56 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Dec 2022 11:33:45 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Jun 2023 14:07:23 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Vu",
"Anh V.",
""
],
[
"Thomas",
"Daniel R.",
""
],
[
"Collier",
"Ben",
""
],
[
"Hutchings",
"Alice",
""
],
[
"Clayton",
"Richard",
""
],
[
"Anderson",
"Ross",
""
]
] |
new_dataset
| 0.964295 |
2210.00716
|
Xin Liu
|
Xin Liu, Girish Narayanswamy, Akshay Paruchuri, Xiaoyu Zhang, Jiankai
Tang, Yuzhe Zhang, Yuntao Wang, Soumyadip Sengupta, Shwetak Patel, Daniel
McDuff
|
rPPG-Toolbox: Deep Remote PPG Toolbox
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Camera-based physiological measurement is a fast growing field of computer
vision. Remote photoplethysmography (rPPG) utilizes imaging devices (e.g.,
cameras) to measure the peripheral blood volume pulse (BVP) via
photoplethysmography, and enables cardiac measurement via webcams and
smartphones. However, the task is non-trivial with important pre-processing,
modeling, and post-processing steps required to obtain state-of-the-art
results. Replication of results and benchmarking of new models is critical for
scientific progress; however, as with many other applications of deep learning,
reliable codebases are not easy to find or use. We present a comprehensive
toolbox, rPPG-Toolbox, that contains unsupervised and supervised rPPG models
with support for public benchmark datasets, data augmentation, and systematic
evaluation: \url{https://github.com/ubicomplab/rPPG-Toolbox}
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 05:11:24 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 04:12:19 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Liu",
"Xin",
""
],
[
"Narayanswamy",
"Girish",
""
],
[
"Paruchuri",
"Akshay",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Tang",
"Jiankai",
""
],
[
"Zhang",
"Yuzhe",
""
],
[
"Wang",
"Yuntao",
""
],
[
"Sengupta",
"Soumyadip",
""
],
[
"Patel",
"Shwetak",
""
],
[
"McDuff",
"Daniel",
""
]
] |
new_dataset
| 0.994874 |
2210.03347
|
Mandar Joshi
|
Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian
Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina
Toutanova
|
Pix2Struct: Screenshot Parsing as Pretraining for Visual Language
Understanding
|
Accepted at ICML
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visually-situated language is ubiquitous -- sources range from textbooks with
diagrams to web pages with images and tables, to mobile apps with buttons and
forms. Perhaps due to this diversity, previous work has typically relied on
domain-specific recipes with limited sharing of the underlying data, model
architectures, and objectives. We present Pix2Struct, a pretrained
image-to-text model for purely visual language understanding, which can be
finetuned on tasks containing visually-situated language. Pix2Struct is
pretrained by learning to parse masked screenshots of web pages into simplified
HTML. The web, with its richness of visual elements cleanly reflected in the
HTML structure, provides a large source of pretraining data well suited to the
diversity of downstream tasks. Intuitively, this objective subsumes common
pretraining signals such as OCR, language modeling, image captioning. In
addition to the novel pretraining strategy, we introduce a variable-resolution
input representation and a more flexible integration of language and vision
inputs, where language prompts such as questions are rendered directly on top
of the input image. For the first time, we show that a single pretrained model
can achieve state-of-the-art results in six out of nine tasks across four
domains: documents, illustrations, user interfaces, and natural images.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 06:42:06 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 21:34:23 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Lee",
"Kenton",
""
],
[
"Joshi",
"Mandar",
""
],
[
"Turc",
"Iulia",
""
],
[
"Hu",
"Hexiang",
""
],
[
"Liu",
"Fangyu",
""
],
[
"Eisenschlos",
"Julian",
""
],
[
"Khandelwal",
"Urvashi",
""
],
[
"Shaw",
"Peter",
""
],
[
"Chang",
"Ming-Wei",
""
],
[
"Toutanova",
"Kristina",
""
]
] |
new_dataset
| 0.997442 |
2210.13094
|
Shahrzad Heydarshahi
|
Florent Becker and Shahrzad Heydarshahi
|
DNA tile self-assembly for 3D-surfaces: Towards genus identification
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new DNA tile self-assembly model: the Surface Flexible Tile
Assembly Model (SFTAM), where 2D tiles are placed on host 3D surfaces made of
axis-parallel unit cubes glued together by their faces, called polycubes. The
bonds are flexible, so that the assembly can bind on the edges of the polycube.
We are interested in the study of SFTAM self-assemblies on 3D surfaces which
are not always embeddable in the Euclidean plane, in order to compare their
different behaviors and to compute the topological properties of the host
surfaces.
We focus on a family of polycubes called cuboids. Order-0 cuboids are
polycubes that have six rectangular faces, and order-1 cuboids are made from
two order-0 cuboids by subtracting one from the other. Thus, order-1 cuboids
can be of genus 0 or of genus 1 (then they contain a tunnel). We are interested
in the genus of these structures, and we present a SFTAM tile assembly system
that determines the genus of a given order-1 cuboid. The SFTAM tile assembly
system that we design contains a specific set $Y$ of tile types with the
following properties. If the assembly is made on a host order-1 cuboid $C$ of
genus 0, no tile of $Y$ appears in any producible assembly, but if $C$ has
genus 1, every terminal assembly contains at least one tile of $Y$.
Thus, we are able to distinguish the host surfaces according to their genus,
by the tiles used in the assembly. This system is specific to order-1 cuboids
but the techniques we use should be generalizable to other families of shapes.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 10:24:03 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 19:31:13 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jun 2023 20:55:02 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Becker",
"Florent",
""
],
[
"Heydarshahi",
"Shahrzad",
""
]
] |
new_dataset
| 0.998865 |
2211.06588
|
Haodong Ouyang
|
Haodong Ouyang
|
DEYO: DETR with YOLO for Step-by-Step Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection is an important topic in computer vision, with
post-processing, an essential part of the typical object detection pipeline,
posing a significant bottleneck affecting the performance of traditional object
detection models. The detection transformer (DETR), as the first end-to-end
target detection model, discards the requirement of manual components like the
anchor and non-maximum suppression (NMS), significantly simplifying the target
detection process. However, compared with most traditional object detection
models, DETR converges very slowly, and a query's meaning is obscure. Thus,
inspired by the Step-by-Step concept, this paper proposes a new two-stage
object detection model, named DETR with YOLO (DEYO), which relies on a
progressive inference to solve the above problems. DEYO is a two-stage
architecture comprising a classic target detection model and a DETR-like model
as the first and second stages, respectively. Specifically, the first stage
provides high-quality queries and anchors that feed into the second stage, improving
the performance and efficiency of the second stage compared to the original
DETR model. Meanwhile, the second stage compensates for the performance
degradation caused by the first stage detector's limitations. Extensive
experiments demonstrate that DEYO attains 50.6 AP and 52.1 AP in 12 and 36
epochs, respectively, while utilizing ResNet-50 as the backbone and multi-scale
features on the COCO dataset. Compared with DINO, an optimal DETR-like model,
the developed DEYO model affords a significant performance improvement of 1.6
AP and 1.2 AP in two epoch settings.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 06:36:17 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 22:07:57 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Jun 2023 03:49:48 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Ouyang",
"Haodong",
""
]
] |
new_dataset
| 0.997643 |
2211.11202
|
Tianyuan Dai
|
Hao Zhang, Tianyuan Dai, Yu-Wing Tai, Chi-Keung Tang
|
FLNeRF: 3D Facial Landmarks Estimation in Neural Radiance Fields
|
Hao Zhang and Tianyuan Dai contributed equally. Project website:
https://github.com/ZHANG1023/FLNeRF
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the first significant work on directly predicting 3D face
landmarks on neural radiance fields (NeRFs). Our 3D coarse-to-fine Face
Landmarks NeRF (FLNeRF) model efficiently samples from a given face NeRF with
individual facial features for accurate landmark detection. Expression
augmentation is applied to facial features at a fine scale to simulate a large
range of emotions, including exaggerated facial expressions (e.g., cheek blowing,
wide mouth opening, eye blinking), for training FLNeRF. Qualitative and
quantitative comparison with related state-of-the-art 3D facial landmark
estimation methods demonstrate the efficacy of FLNeRF, which contributes to
downstream tasks such as high-quality face editing and swapping with direct
control using our NeRF landmarks. Code and data will be available. Github link:
https://github.com/ZHANG1023/FLNeRF.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 06:26:01 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 02:58:12 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Jun 2023 10:52:13 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Zhang",
"Hao",
""
],
[
"Dai",
"Tianyuan",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
] |
new_dataset
| 0.97428 |
2211.16697
|
Haoran Xie
|
Tianyu Zhang, Xusheng Du, Chia-Ming Chang, Xi Yang, Haoran Xie
|
SGDraw: Scene Graph Drawing Interface Using Object-Oriented
Representation
|
16 pages, 9 figures, video is https://youtu.be/acy0SNLfahg, accepted
in HCI International 2023
| null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Scene understanding is an essential and challenging task in computer vision.
To provide the visually fundamental graphical structure of an image, the scene
graph has received increased attention due to its powerful semantic
representation. However, it is difficult to draw a proper scene graph for image
retrieval, image generation, and multi-modal applications. The conventional
scene graph annotation interface is not easy to use in image annotations, and
the automatic scene graph generation approaches using deep neural networks are
prone to generating redundant content while disregarding details. In this work,
we propose SGDraw, a scene graph drawing interface using object-oriented scene
graph representation to help users draw and edit scene graphs interactively.
For the proposed object-oriented representation, we consider the objects,
attributes, and relationships of objects as a structural unit. SGDraw provides
a web-based scene graph annotation and generation tool for scene understanding
applications. To verify the effectiveness of the proposed interface, we
conducted a comparison study with the conventional tool and the user experience
study. The results show that SGDraw can help generate scene graphs with richer
details and describe the images more accurately than traditional bounding box
annotations. We believe the proposed SGDraw can be useful in various vision
tasks, such as image retrieval and generation.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 02:35:09 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 09:02:16 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Zhang",
"Tianyu",
""
],
[
"Du",
"Xusheng",
""
],
[
"Chang",
"Chia-Ming",
""
],
[
"Yang",
"Xi",
""
],
[
"Xie",
"Haoran",
""
]
] |
new_dataset
| 0.983521 |
2212.09381
|
Jianwu Fang
|
Jianwu Fang, Lei-Lei Li, Kuan Yang, Zhedong Zheng, Jianru Xue, and
Tat-Seng Chua
|
Cognitive Accident Prediction in Driving Scenes: A Multimodality
Benchmark
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic accident prediction in driving videos aims to provide an early
warning of the accident occurrence, and supports the decision making of safe
driving systems. Previous works usually concentrate on the spatial-temporal
correlation of object-level context, while they do not fit the inherent
long-tailed data distribution well and are vulnerable to severe environmental
change. In this work, we propose a Cognitive Accident Prediction (CAP) method
that explicitly leverages human-inspired cognition of text description on the
visual observation and the driver attention to facilitate model training. In
particular, the text description provides a dense semantic description guidance
for the primary context of the traffic scene, while the driver attention
provides guidance for focusing on the critical regions closely correlated with
safe driving. CAP is formulated by an attentive text-to-vision shift fusion
module, an attentive scene context transfer module, and the driver attention
guided accident prediction module. We leverage the attention mechanism in these
modules to explore the core semantic cues for accident prediction. In order to
train CAP, we extend an existing self-collected DADA-2000 dataset (with
annotated driver attention for each frame) with further factual text
descriptions for the visual observations before the accidents. Besides, we
construct a new large-scale benchmark consisting of 11,727 in-the-wild accident
videos with over 2.19 million frames (named as CAP-DATA) together with labeled
fact-effect-reason-introspection description and temporal accident frame label.
Based on extensive experiments, the superiority of CAP is validated compared
with state-of-the-art approaches. The code, CAP-DATA, and all results will be
released in \url{https://github.com/JWFanggit/LOTVS-CAP}.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 11:43:02 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 13:29:45 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Fang",
"Jianwu",
""
],
[
"Li",
"Lei-Lei",
""
],
[
"Yang",
"Kuan",
""
],
[
"Zheng",
"Zhedong",
""
],
[
"Xue",
"Jianru",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
new_dataset
| 0.951981 |
2212.10525
|
Suwon Shon
|
Suwon Shon, Siddhant Arora, Chyi-Jiunn Lin, Ankita Pasad, Felix Wu,
Roshan Sharma, Wei-Lun Wu, Hung-Yi Lee, Karen Livescu, Shinji Watanabe
|
SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding
Tasks
|
accepted in ACL 2023 (long paper)
| null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spoken language understanding (SLU) tasks have been studied for many decades
in the speech research community, but have not received as much attention as
lower-level tasks like speech and speaker recognition. In particular, there are
not nearly as many SLU task benchmarks, and many of the existing ones use data
that is not freely available to all researchers. Recent work has begun to
introduce such benchmark datasets for several tasks. In this work, we introduce
several new annotated SLU benchmark tasks based on freely available speech
data, which complement existing benchmarks and address gaps in the SLU
evaluation landscape. We contribute four tasks: question answering and
summarization involve inference over longer speech sequences; named entity
localization addresses the speech-specific task of locating the targeted
content in the signal; dialog act classification identifies the function of a
given speech utterance. We follow the blueprint of the Spoken Language
Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the
development of SLU models that leverage the success of pre-trained speech
representations, we will be publishing for each task (i) annotations for a
relatively small fine-tuning set, (ii) annotated development and test sets, and
(iii) baseline models for easy reproducibility and comparisons. In this work,
we present the details of data collection and annotation and the performance of
the baseline models. We also perform sensitivity analysis of pipeline models'
performance (speech recognizer + text model) to the speech recognition
accuracy, using more than 20 state-of-the-art speech recognition models.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 18:39:59 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 22:51:09 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Shon",
"Suwon",
""
],
[
"Arora",
"Siddhant",
""
],
[
"Lin",
"Chyi-Jiunn",
""
],
[
"Pasad",
"Ankita",
""
],
[
"Wu",
"Felix",
""
],
[
"Sharma",
"Roshan",
""
],
[
"Wu",
"Wei-Lun",
""
],
[
"Lee",
"Hung-Yi",
""
],
[
"Livescu",
"Karen",
""
],
[
"Watanabe",
"Shinji",
""
]
] |
new_dataset
| 0.999763 |
2304.05088
|
Marius Bock
|
Marius Bock, Hilde Kuehne, Kristof Van Laerhoven, Michael Moeller
|
WEAR: An Outdoor Sports Dataset for Wearable and Egocentric Activity
Recognition
|
14 pages, 3 figures, 2 tables
| null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Though research has shown the complementarity of camera- and inertial-based
data, datasets which offer both modalities remain scarce. In this paper, we
introduce WEAR, an outdoor sports dataset for both vision- and inertial-based
human activity recognition (HAR). The dataset comprises data from 18
participants performing a total of 18 different workout activities with
untrimmed inertial (acceleration) and camera (egocentric video) data recorded
at 10 different outside locations. Unlike previous egocentric datasets, WEAR
provides a challenging prediction scenario marked by purposely introduced
activity variations as well as an overall small information overlap across
modalities. Provided benchmark results reveal that single-modality
architectures each have different strengths and weaknesses in their prediction
performance. Further, in light of the recent success of transformer-based
temporal action localization models, we demonstrate their versatility by
applying them in a plain fashion using vision, inertial and combined (vision +
inertial) features as input. Results demonstrate both the applicability of
vision-based transformers for inertial data and fusing both modalities by means
of simple concatenation, with the combined approach (vision + inertial
features) being able to produce the highest mean average precision and
close-to-best F1-score. The dataset and code to reproduce the experiments are
publicly available via https://mariusbock.github.io/wear/
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 09:31:07 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 07:46:34 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Bock",
"Marius",
""
],
[
"Kuehne",
"Hilde",
""
],
[
"Van Laerhoven",
"Kristof",
""
],
[
"Moeller",
"Michael",
""
]
] |
new_dataset
| 0.999733 |
2305.01818
|
Fabio Pavanello
|
Fabio Pavanello, Elena Ioana Vatajelu, Alberto Bosio, Thomas Van
Vaerenbergh, Peter Bienstman, Benoit Charbonnier, Alessio Carpegna, Stefano
Di Carlo, Alessandro Savino
|
Special Session: Neuromorphic hardware design and reliability from
traditional CMOS to emerging technologies
|
10 pages, 4 figures, 4 tables
|
2023 IEEE 41st VLSI Test Symposium (VTS)
|
10.1109/VTS56346.2023.10139932
| null |
cs.ET eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of neuromorphic computing has been rapidly evolving in recent
years, with an increasing focus on hardware design and reliability. This
special session paper provides an overview of the recent developments in
neuromorphic computing, focusing on hardware design and reliability. We first
review the traditional CMOS-based approaches to neuromorphic hardware design
and identify the challenges related to scalability, latency, and power
consumption. We then investigate alternative approaches based on emerging
technologies, specifically integrated photonics approaches within the NEUROPULS
project. Finally, we examine the impact of device variability and aging on the
reliability of neuromorphic hardware and present techniques for mitigating
these effects. This review is intended to serve as a valuable resource for
researchers and practitioners in neuromorphic computing.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 22:55:24 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Pavanello",
"Fabio",
""
],
[
"Vatajelu",
"Elena Ioana",
""
],
[
"Bosio",
"Alberto",
""
],
[
"Van Vaerenbergh",
"Thomas",
""
],
[
"Bienstman",
"Peter",
""
],
[
"Charbonnier",
"Benoit",
""
],
[
"Carpegna",
"Alessio",
""
],
[
"Di Carlo",
"Stefano",
""
],
[
"Savino",
"Alessandro",
""
]
] |
new_dataset
| 0.969488 |
2305.04105
|
Mohsinul Kabir
|
Faria Binte Kader, Nafisa Hossain Nujat, Tasmia Binte Sogir, Mohsinul
Kabir, Hasan Mahmud and Kamrul Hasan
|
"When Words Fail, Emojis Prevail": Generating Sarcastic Utterances with
Emoji Using Valence Reversal and Semantic Incongruity
|
Accepted in the 61st Annual Meeting of the Association for
Computational Linguistics: Student Research Workshop (ACL SRW 2023)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Sarcasm is a form of figurative language that serves as a humorous tool for
mockery and ridicule. We present a novel architecture for sarcasm generation
with emoji from a non-sarcastic input sentence in English. We divide the
generation task into two sub-tasks: one for generating textual sarcasm and
another for collecting emojis associated with those sarcastic sentences. Two
key elements of sarcasm are incorporated into the textual sarcasm generation
task: valence reversal and semantic incongruity with context, where the context
may involve shared commonsense or general knowledge between the speaker and
their audience. The majority of existing sarcasm generation works have focused
on this textual form. However, in the real world, when written texts fall short
of effectively capturing the emotional cues of spoken and face-to-face
communication, people often opt for emojis to accurately express their
emotions. Due to the wide range of applications of emojis, incorporating
appropriate emojis to generate textual sarcastic sentences helps advance
sarcasm generation. We conclude our study by evaluating the generated sarcastic
sentences using human judgement. All the code and data used in this study have
been made publicly available.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 17:49:41 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 15:11:03 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Kader",
"Faria Binte",
""
],
[
"Nujat",
"Nafisa Hossain",
""
],
[
"Sogir",
"Tasmia Binte",
""
],
[
"Kabir",
"Mohsinul",
""
],
[
"Mahmud",
"Hasan",
""
],
[
"Hasan",
"Kamrul",
""
]
] |
new_dataset
| 0.996262 |
2305.06940
|
Ning Ding
|
Ning Ding, Ce Zhang, Azim Eskandarian
|
SalienDet: A Saliency-based Feature Enhancement Algorithm for Object
Detection for Autonomous Driving
|
This paper is accepted and being published at IEEE Transactions on
Intelligent Vehicles
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection (OD) is crucial to autonomous driving. On the other hand,
unknown objects, which have not been seen in the training sample set, are one of
the reasons that hinder autonomous vehicles from driving beyond the operational
domain. To address this issue, we propose a saliency-based OD algorithm
(SalienDet) to detect unknown objects. Our SalienDet utilizes a saliency-based
algorithm to enhance image features for object proposal generation. Moreover,
we design a dataset relabeling approach to differentiate the unknown objects
from all objects in the training sample set to achieve Open-World Detection. To
validate the performance of SalienDet, we evaluate SalienDet on KITTI,
nuScenes, and BDD datasets, and the result indicates that it outperforms
existing algorithms for unknown object detection. Notably, SalienDet can be
easily adapted for incremental learning in open-world detection tasks. The
project page is
\url{https://github.com/dingmike001/SalienDet-Open-Detection.git}.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 16:19:44 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 05:28:33 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Ding",
"Ning",
""
],
[
"Zhang",
"Ce",
""
],
[
"Eskandarian",
"Azim",
""
]
] |
new_dataset
| 0.993761 |
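The SalienDet abstract above hinges on enhancing image features with a saliency map before proposal generation. As a rough illustration of that idea only — the paper's actual saliency algorithm and integration point differ and are not reproduced here — the Python sketch below uses a simple gradient-magnitude saliency map; the function names and the blending factor `alpha` are hypothetical.

```python
import numpy as np

def saliency_map(image):
    """Gradient-magnitude saliency normalized to [0, 1]: a generic stand-in for
    the saliency algorithm used by SalienDet (whose exact method differs)."""
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-8)

def enhance(image, alpha=0.5):
    """Blend the saliency map back into the image so that object-like regions
    are emphasized before region-proposal generation."""
    sal = saliency_map(image)[..., None]
    return np.clip(image * (1.0 + alpha * sal), 0.0, 1.0)

img = np.random.rand(128, 128, 3)   # stand-in for an input driving image
print(enhance(img).shape)           # (128, 128, 3)
```

In an actual detector, the same reweighting would more likely be applied to backbone feature maps than to raw pixels.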
2305.08295
|
Wei-I Lin
|
Hsiu-Hsuan Wang, Wei-I Lin, Hsuan-Tien Lin
|
CLCIFAR: CIFAR-Derived Benchmark Datasets with Human Annotated
Complementary Labels
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Complementary-label learning (CLL) is a weakly-supervised learning paradigm
that aims to train a multi-class classifier using only complementary labels,
which indicate classes to which an instance does not belong. Despite numerous
algorithmic proposals for CLL, their practical performance remains unclear for
two reasons. Firstly, these algorithms often rely on assumptions about the
generation of complementary labels. Secondly, their evaluation has been limited
to synthetic datasets. To gain insights into the real-world performance of CLL
algorithms, we developed a protocol to collect complementary labels annotated
by human annotators. This effort resulted in the creation of two datasets,
CLCIFAR10 and CLCIFAR20, derived from CIFAR10 and CIFAR100, respectively. These
datasets, publicly released at https://github.com/ntucllab/complementary_cifar,
represent the very first real-world CLL datasets. Through extensive benchmark
experiments, we discovered a notable decline in performance when transitioning
from synthetic datasets to real-world datasets. We conducted a dataset-level
ablation study to investigate the key factors contributing to this decline. Our
analyses highlighted annotation noise as the most influential factor present in
the real-world datasets. Additionally, the biased nature of human-annotated
complementary labels was found to make certain CLL algorithms more susceptible
to overfitting. These findings suggest that the community should devote more
research effort to developing CLL algorithms that are robust to noisy and biased
complementary-label distributions.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 01:48:53 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 05:51:30 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Wang",
"Hsiu-Hsuan",
""
],
[
"Lin",
"Wei-I",
""
],
[
"Lin",
"Hsuan-Tien",
""
]
] |
new_dataset
| 0.999359 |
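For readers unfamiliar with the complementary-label learning setting evaluated by CLCIFAR10/20, the sketch below shows one common baseline loss (a "negative learning" style objective that pushes down the probability of the complementary class). It is a generic illustration, not an algorithm from the paper; the function name and the 1e-8 stabilizer are assumptions.

```python
import torch
import torch.nn.functional as F

def complementary_nll(logits, comp_labels):
    """Negative-learning style loss for complementary labels: reduce the predicted
    probability of the class the instance does NOT belong to.
    logits: (N, C); comp_labels: (N,) complementary class indices."""
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.view(-1, 1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-8).mean()

logits = torch.randn(8, 10, requires_grad=True)   # toy batch, 10 classes
comp_labels = torch.randint(0, 10, (8,))          # one complementary label each
loss = complementary_nll(logits, comp_labels)
loss.backward()
print(float(loss))
```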
2306.07279
|
Tiange Luo
|
Tiange Luo, Chris Rockwell, Honglak Lee, Justin Johnson
|
Scalable 3D Captioning with Pretrained Models
|
Dataset link: https://huggingface.co/datasets/tiange/Cap3D
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Cap3D, an automatic approach for generating descriptive text for
3D objects. This approach utilizes pretrained models from image captioning,
image-text alignment, and LLM to consolidate captions from multiple views of a
3D asset, completely side-stepping the time-consuming and costly process of
manual annotation. We apply Cap3D to the recently introduced large-scale 3D
dataset, Objaverse, resulting in 660k 3D-text pairs. Our evaluation, conducted
using 41k human annotations from the same dataset, demonstrates that Cap3D
surpasses human-authored descriptions in terms of quality, cost, and speed.
Through effective prompt engineering, Cap3D rivals human performance in
generating geometric descriptions on 17k collected annotations from the ABO
dataset. Finally, we finetune Text-to-3D models on Cap3D and human captions,
and show that Cap3D outperforms them; we also benchmark the SOTA, including
Point-E, Shap-E, and DreamFusion.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 17:59:03 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 03:58:15 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Luo",
"Tiange",
""
],
[
"Rockwell",
"Chris",
""
],
[
"Lee",
"Honglak",
""
],
[
"Johnson",
"Justin",
""
]
] |
new_dataset
| 0.99464 |
2306.08183
|
Kelly Marshall
|
Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya
Balu, Adarsh Krishnamurthy, Chinmay Hegde
|
ZeroForge: Feedforward Text-to-Shape Without 3D Supervision
|
19 pages, High resolution figures needed to demonstrate 3D results
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Current state-of-the-art methods for text-to-shape generation either require
supervised training using a labeled dataset of pre-defined 3D shapes, or
perform expensive inference-time optimization of implicit neural
representations. In this work, we present ZeroForge, an approach for zero-shot
text-to-shape generation that avoids both pitfalls. To achieve open-vocabulary
shape generation, we require careful architectural adaptation of existing
feed-forward approaches, as well as a combination of data-free CLIP-loss and
contrastive losses to avoid mode collapse. Using these techniques, we are able
to considerably expand the generative ability of existing feed-forward
text-to-shape models such as CLIP-Forge. We support our method via extensive
qualitative and quantitative evaluations.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 00:38:14 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 00:48:13 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Marshall",
"Kelly O.",
""
],
[
"Pham",
"Minh",
""
],
[
"Joshi",
"Ameya",
""
],
[
"Jignasu",
"Anushrut",
""
],
[
"Balu",
"Aditya",
""
],
[
"Krishnamurthy",
"Adarsh",
""
],
[
"Hegde",
"Chinmay",
""
]
] |
new_dataset
| 0.999696 |
2306.09346
|
Yossi Gandelsman
|
Amil Dravid, Yossi Gandelsman, Alexei A. Efros, Assaf Shocher
|
Rosetta Neurons: Mining the Common Units in a Model Zoo
|
Project page: https://yossigandelsman.github.io/rosetta_neurons/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Do different neural networks, trained for various vision tasks, share some
common representations? In this paper, we demonstrate the existence of common
features we call "Rosetta Neurons" across a range of models with different
architectures, different tasks (generative and discriminative), and different
types of supervision (class-supervised, text-supervised, self-supervised). We
present an algorithm for mining a dictionary of Rosetta Neurons across several
popular vision models: Class Supervised-ResNet50, DINO-ResNet50, DINO-ViT, MAE,
CLIP-ResNet50, BigGAN, StyleGAN-2, StyleGAN-XL. Our findings suggest that
certain visual concepts and structures are inherently embedded in the natural
world and can be learned by different models regardless of the specific task or
architecture, and without the use of semantic labels. We can visualize shared
concepts directly due to generative models included in our analysis. The
Rosetta Neurons facilitate model-to-model translation enabling various
inversion-based manipulations, including cross-class alignments, shifting,
zooming, and more, without the need for specialized training.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:59:54 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 04:36:31 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Dravid",
"Amil",
""
],
[
"Gandelsman",
"Yossi",
""
],
[
"Efros",
"Alexei A.",
""
],
[
"Shocher",
"Assaf",
""
]
] |
new_dataset
| 0.987119 |
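The Rosetta Neurons record above describes matching units across models by how similarly they respond to the same images. The snippet below is a heavily simplified stand-in for that matching step — plain Pearson correlation over pooled activations — and is not the paper's mining algorithm; the array shapes and names are assumed for illustration.

```python
import numpy as np

def unit_correlation(acts_a, acts_b):
    """Pearson correlation between every unit of model A and every unit of model B,
    computed over the same image set (a simplified proxy for matching units).
    acts_*: (num_images, num_units) pooled activations on a shared image set."""
    a = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    b = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    return a.T @ b / acts_a.shape[0]          # (units_a, units_b)

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(500, 64))           # stand-in activations, model A
acts_b = rng.normal(size=(500, 32))           # stand-in activations, model B
corr = unit_correlation(acts_a, acts_b)
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"best-matched pair: unit {i} of A with unit {j} of B (r={corr[i, j]:.3f})")
```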
2306.09349
|
Zhi-Hao Lin
|
Zhi-Hao Lin, Bohan Liu, Yi-Ting Chen, David Forsyth, Jia-Bin Huang,
Anand Bhattad, Shenlong Wang
|
UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video
|
https://urbaninverserendering.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We show how to build a model that allows realistic, free-viewpoint renderings
of a scene under novel lighting conditions from video. Our method -- UrbanIR:
Urban Scene Inverse Rendering -- computes an inverse graphics representation
from the video. UrbanIR jointly infers shape, albedo, visibility, and sun and
sky illumination from a single video of unbounded outdoor scenes with unknown
lighting. UrbanIR uses videos from cameras mounted on cars (in contrast to many
views of the same points in typical NeRF-style estimation). As a result,
standard methods produce poor geometry estimates (for example, roofs), and
there are numerous ''floaters''. Errors in inverse graphics inference can
result in strong rendering artifacts. UrbanIR uses novel losses to control
these and other sources of error. UrbanIR uses a novel loss to make very good
estimates of shadow volumes in the original scene. The resulting
representations facilitate controllable editing, delivering photorealistic
free-viewpoint renderings of relit scenes and inserted objects. Qualitative
evaluation demonstrates strong improvements over the state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:59:59 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 02:41:44 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Lin",
"Zhi-Hao",
""
],
[
"Liu",
"Bohan",
""
],
[
"Chen",
"Yi-Ting",
""
],
[
"Forsyth",
"David",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Bhattad",
"Anand",
""
],
[
"Wang",
"Shenlong",
""
]
] |
new_dataset
| 0.999496 |
2306.09379
|
Shengqi Xu
|
Shengqi Xu, Shuning Cao, Haoyue Liu, Xueyao Xiao, Yi Chang, Luxin Yan
|
1st Solution Places for CVPR 2023 UG$^2$+ Challenge Track 2.2-Coded
Target Restoration through Atmospheric Turbulence
|
arXiv admin note: text overlap with arXiv:2306.08963
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this technical report, we briefly introduce the solution of our team
VIELab-HUST for coded target restoration through atmospheric turbulence in CVPR
2023 UG$^2$+ Track 2.2. In this task, we propose an efficient multi-stage
framework to restore a high quality image from distorted frames. Specifically,
each distorted frame is initially aligned using image registration to suppress
geometric distortion. We subsequently select the sharpest set of registered
frames by employing a frame selection approach based on image sharpness, and
average them to produce an image that is largely free of geometric distortion,
albeit with blurriness. A learning-based deblurring method is then applied to
remove the residual blur in the averaged image. Finally, post-processing
techniques are utilized to further enhance the quality of the output image. Our
framework is capable of handling the different kinds of coded target datasets
provided in the final testing phase, and ranked 1st on the final leaderboard.
Our code will be available at https://github.com/xsqhust/Turbulence_Removal.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 09:06:48 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Xu",
"Shengqi",
""
],
[
"Cao",
"Shuning",
""
],
[
"Liu",
"Haoyue",
""
],
[
"Xiao",
"Xueyao",
""
],
[
"Chang",
"Yi",
""
],
[
"Yan",
"Luxin",
""
]
] |
new_dataset
| 0.993842 |
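The turbulence-restoration pipeline above includes a frame-selection stage based on image sharpness followed by averaging. The sketch below illustrates only that stage, using a gradient-variance sharpness score on synthetic frames; the score, the `keep` parameter, and the function names are assumptions, and the registration and deblurring stages are omitted.

```python
import numpy as np

def sharpness(frame):
    """Variance of the gradient magnitude as a simple sharpness score
    (a common stand-in for Laplacian-variance focus measures)."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.var(np.hypot(gx, gy))

def select_and_average(frames, keep=10):
    """Keep the sharpest registered frames and average them to suppress
    residual geometric distortion, as in the multi-stage pipeline above."""
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1][:keep]   # indices of the sharpest frames
    return np.mean([frames[i] for i in order], axis=0)

frames = [np.random.rand(64, 64) for _ in range(40)]   # stand-ins for registered frames
fused = select_and_average(frames)
print(fused.shape)   # (64, 64)
```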
2306.09389
|
Junjun Yan
|
Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhoui and Jie Liu
|
ST-PINN: A Self-Training Physics-Informed Neural Network for Partial
Differential Equations
| null | null | null | null |
cs.LG cs.AI physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Partial differential equations (PDEs) are an essential computational kernel
in physics and engineering. With the advance of deep learning, physics-informed
neural networks (PINNs), as a mesh-free method, have shown great potential for
fast PDE solving in various applications. To address the issue of low accuracy
and convergence problems of existing PINNs, we propose a self-training
physics-informed neural network, ST-PINN. Specifically, ST-PINN introduces a
pseudo-label-based self-learning algorithm during training. It employs the
governing equation as the pseudo-label evaluation index and selects the
highest-confidence examples from the sample points to attach pseudo labels.
To the best of our knowledge, we are the first to incorporate a self-training
mechanism into physics-informed learning. We conduct experiments on five PDE
problems in different fields and scenarios. The results demonstrate that the
proposed method allows the network to learn more physical information and
benefit convergence. The ST-PINN outperforms existing physics-informed neural
network methods and improves the accuracy by a factor of 1.33x-2.54x. The code
of ST-PINN is available at GitHub: https://github.com/junjun-yan/ST-PINN.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 15:49:13 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Yan",
"Junjun",
""
],
[
"Chen",
"Xinhai",
""
],
[
"Wang",
"Zhichao",
""
],
[
"Zhoui",
"Enqiang",
""
],
[
"Liu",
"Jie",
""
]
] |
new_dataset
| 0.981483 |
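ST-PINN's key step, per the abstract above, is using the governing-equation residual as a confidence measure and attaching pseudo labels to the lowest-residual sample points. The sketch below illustrates that selection on a toy 1D problem with a finite-difference residual; the toy equation, the keep ratio, and the function names are assumptions and are not taken from the paper or its repository.

```python
import numpy as np

def pde_residual(u_pred, x):
    """Toy residual for u_xx = 0, evaluated with finite differences.
    Stands in for the governing-equation residual of the actual PDE."""
    dx = x[1] - x[0]
    u_xx = (u_pred[2:] - 2.0 * u_pred[1:-1] + u_pred[:-2]) / dx**2
    return np.abs(u_xx)

def select_pseudo_labels(u_pred, x, keep_ratio=0.2):
    """Attach pseudo labels to the sample points whose residual (confidence proxy)
    is smallest, mimicking the residual-based selection described above."""
    res = pde_residual(u_pred, x)
    k = max(1, int(keep_ratio * res.size))
    idx = np.argsort(res)[:k] + 1        # +1: residual is defined on interior points
    return idx, u_pred[idx]              # indices and their pseudo-labelled values

x = np.linspace(0.0, 1.0, 101)
u_pred = x + 0.01 * np.random.randn(x.size)   # stand-in for a network prediction
idx, pseudo = select_pseudo_labels(u_pred, x)
print(f"selected {idx.size} pseudo-labelled points")
```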
2306.09390
|
Hamideh Ghanadian
|
Hamideh Ghanadian, Isar Nejadgholi, Hussein Al Osman
|
ChatGPT for Suicide Risk Assessment on Social Media: Quantitative
Evaluation of Model Performance, Potentials and Limitations
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel framework for quantitatively evaluating the
interactive ChatGPT model in the context of suicidality assessment from social
media posts, utilizing the University of Maryland Reddit suicidality dataset.
We conduct a technical evaluation of ChatGPT's performance on this task using
Zero-Shot and Few-Shot experiments and compare its results with those of two
fine-tuned transformer-based models. Additionally, we investigate the impact of
different temperature parameters on ChatGPT's response generation and discuss
the optimal temperature based on the inconclusiveness rate of ChatGPT. Our
results indicate that while ChatGPT attains considerable accuracy in this task,
transformer-based models fine-tuned on human-annotated datasets exhibit
superior performance. Moreover, our analysis sheds light on how adjusting the
ChatGPT's hyperparameters can improve its ability to assist mental health
professionals in this critical task.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 16:01:30 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Ghanadian",
"Hamideh",
""
],
[
"Nejadgholi",
"Isar",
""
],
[
"Osman",
"Hussein Al",
""
]
] |
new_dataset
| 0.960912 |
2306.09424
|
Adam Stewart
|
Adam J. Stewart, Nils Lehmann, Isaac A. Corley, Yi Wang, Yi-Chia
Chang, Nassim Ait Ali Braham, Shradha Sehgal, Caleb Robinson, Arindam
Banerjee
|
SSL4EO-L: Datasets and Foundation Models for Landsat Imagery
| null | null | null | null |
cs.LG cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The Landsat program is the longest-running Earth observation program in
history, with 50+ years of data acquisition by 8 satellites. The multispectral
imagery captured by sensors onboard these satellites is critical for a wide
range of scientific fields. Despite the increasing popularity of deep learning
and remote sensing, the majority of researchers still use decision trees and
random forests for Landsat image analysis due to the prevalence of small
labeled datasets and lack of foundation models. In this paper, we introduce
SSL4EO-L, the first ever dataset designed for Self-Supervised Learning for
Earth Observation for the Landsat family of satellites (including 3 sensors and
2 product levels) and the largest Landsat dataset in history (5M image
patches). Additionally, we modernize and re-release the L7 Irish and L8 Biome
cloud detection datasets, and introduce the first ML benchmark datasets for
Landsats 4-5 TM and Landsat 7 ETM+ SR. Finally, we pre-train the first
foundation models for Landsat imagery using SSL4EO-L and evaluate their
performance on multiple semantic segmentation tasks. All datasets and model
weights are available via the TorchGeo (https://github.com/microsoft/torchgeo)
library, making reproducibility and experimentation easy, and enabling
scientific advancements in the burgeoning field of remote sensing for a myriad
of downstream applications.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 18:11:20 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Stewart",
"Adam J.",
""
],
[
"Lehmann",
"Nils",
""
],
[
"Corley",
"Isaac A.",
""
],
[
"Wang",
"Yi",
""
],
[
"Chang",
"Yi-Chia",
""
],
[
"Braham",
"Nassim Ait Ali",
""
],
[
"Sehgal",
"Shradha",
""
],
[
"Robinson",
"Caleb",
""
],
[
"Banerjee",
"Arindam",
""
]
] |
new_dataset
| 0.998854 |
2306.09427
|
Jacob Merson
|
Jacob Merson, Catalin Picu, Mark S. Shephard
|
MuMFiM: Multiscale Modeling of Fibrous Materials
| null | null | null | null |
cs.DC cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents MuMFiM, an open source application for multiscale
modeling of fibrous materials on massively parallel computers. MuMFiM uses two
scales to represent fibrous materials such as biological network materials
(extracellular matrix, connective tissue, etc.). It is designed to make use of
multiple levels of parallelism, including distributed parallelism of the macro
and microscales as well as GPU accelerated data-parallelism of the microscale.
Scaling results of the GPU accelerated microscale show that solving microscale
problems concurrently on the GPU can lead to a 1000x speedup over the solution
of a single RVE on the GPU. In addition, we show nearly optimal strong and weak
scaling results of MuMFiM on up to 128 nodes of AiMOS (Rensselaer Polytechnic
Institute) which is composed of IBM AC922 nodes with 6 Volta V100 GPU and 2 20
core Power 9 CPUs each. We also show how MuMFiM can be used to solve problems
of interest to the broader engineering community, in particular providing an
example of the facet capsule ligament (FCL) of the human spine undergoing
uniaxial extension.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 18:21:02 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Merson",
"Jacob",
""
],
[
"Picu",
"Catalin",
""
],
[
"Shephard",
"Mark S.",
""
]
] |
new_dataset
| 0.999457 |
2306.09467
|
Mononito Goswami Mr.
|
Mononito Goswami, Vedant Sanil, Arjun Choudhry, Arvind Srinivasan,
Chalisa Udompanyawit, Artur Dubrawski
|
AQuA: A Benchmarking Tool for Label Quality Assessment
|
Submitted to the 37th Conference on Neural Information Processing
Systems (NeurIPS 2023) Track on Datasets and Benchmarks. Source code can be
found at www.github.com/autonlab/aqua/
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning (ML) models are only as good as the data they are trained
on. But recent studies have found datasets widely used to train and evaluate ML
models, e.g. ImageNet, to have pervasive labeling errors. Erroneous labels on
the train set hurt ML models' ability to generalize, and they impact evaluation
and model selection using the test set. Consequently, learning in the presence
of labeling errors is an active area of research, yet this field lacks a
comprehensive benchmark to evaluate these methods. Most of these methods are
evaluated on a few computer vision datasets with significant variance in the
experimental protocols. With such a large pool of methods and inconsistent
evaluation, it is also unclear how ML practitioners can choose the right models
to assess label quality in their data. To this end, we propose a benchmarking
environment AQuA to rigorously evaluate methods that enable machine learning in
the presence of label noise. We also introduce a design space to delineate
concrete design choices of label error detection models. We hope that our
proposed design space and benchmark enable practitioners to choose the right
tools to improve their label quality and that our benchmark enables objective
and rigorous evaluation of machine learning tools facing mislabeled data.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 19:42:11 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Goswami",
"Mononito",
""
],
[
"Sanil",
"Vedant",
""
],
[
"Choudhry",
"Arjun",
""
],
[
"Srinivasan",
"Arvind",
""
],
[
"Udompanyawit",
"Chalisa",
""
],
[
"Dubrawski",
"Artur",
""
]
] |
new_dataset
| 0.992088 |
2306.09468
|
Xiaotian Han
|
Xiaotian Han, Jianfeng Chi, Yu Chen, Qifan Wang, Han Zhao, Na Zou, Xia
Hu
|
FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods
| null | null | null | null |
cs.LG cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the Fair Fairness Benchmark (\textsf{FFB}), a
benchmarking framework for in-processing group fairness methods. Ensuring
fairness in machine learning is critical for ethical and legal compliance.
However, there exist challenges in comparing and developing fairness methods
due to inconsistencies in experimental settings, lack of accessible algorithmic
implementations, and limited extensibility of current fairness packages and
tools. To address these issues, we introduce an open-source, standardized
benchmark for evaluating in-processing group fairness methods and provide a
comprehensive analysis of state-of-the-art methods to ensure different notions
of group fairness. This work offers the following key contributions: the
provision of flexible, extensible, minimalistic, and research-oriented
open-source code; the establishment of unified fairness method benchmarking
pipelines; and extensive benchmarking, which yields key insights from
$\mathbf{45,079}$ experiments. We believe our work will significantly
facilitate the growth and development of the fairness research community. The
benchmark, including code and running logs, is available at
https://github.com/ahxt/fair_fairness_benchmark
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 19:51:28 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Han",
"Xiaotian",
""
],
[
"Chi",
"Jianfeng",
""
],
[
"Chen",
"Yu",
""
],
[
"Wang",
"Qifan",
""
],
[
"Zhao",
"Han",
""
],
[
"Zou",
"Na",
""
],
[
"Hu",
"Xia",
""
]
] |
new_dataset
| 0.992625 |
2306.09484
|
Jingxin Li
|
Jingxin Li, Xiaolan Liu and Toktam Mahmoodi
|
Opportunistic Transmission of Distributed Learning Models in Mobile UAVs
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an opportunistic scheme for the transmission of
model updates from Federated Learning (FL) clients to the server, where clients
are wireless mobile users. This proposal aims to opportunistically take
advantage of the proximity of users to the base station or the general
condition of the wireless transmission channel, rather than traditional
synchronous transmission. In this scheme, during the training, intermediate
model parameters are uploaded to the server, opportunistically and based on the
wireless channel condition. Then, the proactively-transmitted model updates are
used for the global aggregation if the final local model updates are delayed.
We apply this novel model transmission scheme to one of our previous works,
a hybrid split and federated learning (HSFL) framework for UAVs.
Simulation results confirm the superiority of using proactive transmission over
the conventional asynchronous aggregation scheme for stale models, yielding
higher accuracy and more stable training performance. Test accuracy
increases by up to 13.47% with just one round of extra transmission.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 20:28:29 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Li",
"Jingxin",
""
],
[
"Liu",
"Xiaolan",
""
],
[
"Mahmoodi",
"Toktam",
""
]
] |
new_dataset
| 0.977325 |
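The opportunistic-transmission record above proposes uploading intermediate model parameters whenever the wireless channel happens to be good, and letting the server fall back to those snapshots when a final update is late. The toy simulation below sketches that control flow only; the channel model, threshold, and aggregation rule are placeholders and do not reproduce the HSFL framework from the paper.

```python
import random

def local_training(model, channel_gain, threshold=0.8, steps=10):
    """One client's local round: besides the final update, proactively upload an
    intermediate snapshot whenever the (simulated) channel is good."""
    snapshot = None
    for _ in range(steps):
        model["w"] += 0.01 * random.uniform(-1.0, 1.0)   # placeholder SGD step
        if channel_gain() > threshold:                   # opportunistic transmission
            snapshot = dict(model)
    return model, snapshot

def aggregate(final_updates, snapshots):
    """Server side: if a client's final update is delayed (None), fall back to its
    proactively transmitted snapshot instead of waiting or dropping the client."""
    usable = [f if f is not None else s for f, s in zip(final_updates, snapshots)]
    usable = [u for u in usable if u is not None]
    if not usable:
        return None
    return sum(u["w"] for u in usable) / len(usable)

random.seed(0)
clients = [{"w": 0.0} for _ in range(5)]
rounds = [local_training(c, channel_gain=random.random) for c in clients]
finals = [m if random.random() > 0.4 else None for m, _ in rounds]  # some are delayed
snaps = [s for _, s in rounds]
print("aggregated weight:", aggregate(finals, snaps))
```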
2306.09489
|
Matthijs Douze
|
Ed Pizzi and Giorgos Kordopatis-Zilos and Hiral Patel and Gheorghe
Postelnicu and Sugosh Nagavara Ravindra and Akshay Gupta and Symeon
Papadopoulos and Giorgos Tolias and Matthijs Douze
|
The 2023 Video Similarity Dataset and Challenge
| null | null | null | null |
cs.CV cs.AI cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
This work introduces a dataset, benchmark, and challenge for the problem of
video copy detection and localization. The problem comprises two distinct but
related tasks: determining whether a query video shares content with a
reference video ("detection"), and additionally temporally localizing the
shared content within each video ("localization"). The benchmark is designed to
evaluate methods on these two tasks, and simulates a realistic
needle-in-haystack setting, where the majority of both query and reference
videos are "distractors" containing no copied content. We propose a metric that
reflects both detection and localization accuracy. The associated challenge
consists of two corresponding tracks, each with restrictions that reflect
real-world settings. We provide implementation code for evaluation and
baselines. We also analyze the results and methods of the top submissions to
the challenge. The dataset, baseline methods, and evaluation code are publicly
available and will be discussed at a dedicated CVPR'23 workshop.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 20:34:43 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Pizzi",
"Ed",
""
],
[
"Kordopatis-Zilos",
"Giorgos",
""
],
[
"Patel",
"Hiral",
""
],
[
"Postelnicu",
"Gheorghe",
""
],
[
"Ravindra",
"Sugosh Nagavara",
""
],
[
"Gupta",
"Akshay",
""
],
[
"Papadopoulos",
"Symeon",
""
],
[
"Tolias",
"Giorgos",
""
],
[
"Douze",
"Matthijs",
""
]
] |
new_dataset
| 0.999869 |
2306.09505
|
Marco Antonio Stranisci
|
Marco Antonio Stranisci, Rossana Damiano, Enrico Mensa, Viviana Patti,
Daniele Radicioni, Tommaso Caselli
|
Wikibio: a Semantic Resource for the Intersectional Analysis of
Biographical Events
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Biographical event detection is a relevant task for the exploration and
comparison of the ways in which people's lives are told and represented. In
this sense, it may support several applications in digital humanities and in
works aimed at exploring bias about minoritized groups. Despite that, there are
no corpora and models specifically designed for this task. In this paper we
fill this gap by presenting a new corpus annotated for biographical event
detection. The corpus, which includes 20 Wikipedia biographies, was compared
with five existing corpora to train a model for the biographical event
detection task. The model was able to detect all mentions of the target-entity
in a biography with an F-score of 0.808 and the entity-related events with an
F-score of 0.859. Finally, the model was used for performing an analysis of
biases about women and non-Western people in Wikipedia biographies.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 20:59:37 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Stranisci",
"Marco Antonio",
""
],
[
"Damiano",
"Rossana",
""
],
[
"Mensa",
"Enrico",
""
],
[
"Patti",
"Viviana",
""
],
[
"Radicioni",
"Daniele",
""
],
[
"Caselli",
"Tommaso",
""
]
] |
new_dataset
| 0.997137 |
2306.09537
|
Zhehui Huang
|
Zhehui Huang, Sumeet Batra, Tao Chen, Rahul Krupani, Tushar Kumar,
Artem Molchanov, Aleksei Petrenko, James A. Preiss, Zhaojing Yang, Gaurav S.
Sukhatme
|
QuadSwarm: A Modular Multi-Quadrotor Simulator for Deep Reinforcement
Learning with Direct Thrust Control
|
Paper published in ICRA 2023 Workshop: The Role of Robotics
Simulators for Unmanned Aerial Vehicles. The workshop can be found in
https://imrclab.github.io/workshop-uav-sims-icra2023/
| null | null | null |
cs.RO cs.AI cs.LG cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) has shown promise in creating robust policies for
robotics tasks. However, contemporary RL algorithms are data-hungry, often
requiring billions of environment transitions to train successful policies.
This necessitates the use of fast and highly-parallelizable simulators. In
addition to speed, such simulators need to model the physics of the robots and
their interaction with the environment to a level acceptable for transferring
policies learned in simulation to reality. We present QuadSwarm, a fast,
reliable simulator for research in single and multi-robot RL for quadrotors
that addresses both issues. QuadSwarm, with fast forward-dynamics propagation
decoupled from rendering, is designed to be highly parallelizable such that
throughput scales linearly with additional compute. It provides multiple
components tailored toward multi-robot RL, including diverse training
scenarios, and provides domain randomization to facilitate the development and
sim2real transfer of multi-quadrotor control policies. Initial experiments
suggest that QuadSwarm achieves over 48,500 simulation samples per second (SPS)
on a single quadrotor and over 62,000 SPS on eight quadrotors on a 16-core CPU.
The code can be found in https://github.com/Zhehui-Huang/quad-swarm-rl.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 22:46:20 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Huang",
"Zhehui",
""
],
[
"Batra",
"Sumeet",
""
],
[
"Chen",
"Tao",
""
],
[
"Krupani",
"Rahul",
""
],
[
"Kumar",
"Tushar",
""
],
[
"Molchanov",
"Artem",
""
],
[
"Petrenko",
"Aleksei",
""
],
[
"Preiss",
"James A.",
""
],
[
"Yang",
"Zhaojing",
""
],
[
"Sukhatme",
"Gaurav S.",
""
]
] |
new_dataset
| 0.975423 |
2306.09579
|
Xiaosong Wang
|
Dequan Wang, Xiaosong Wang, Lilong Wang, Mengzhang Li, Qian Da,
Xiaoqiang Liu, Xiangyu Gao, Jun Shen, Junjun He, Tian Shen, Qi Duan, Jie
Zhao, Kang Li, Yu Qiao, Shaoting Zhang
|
MedFMC: A Real-world Dataset and Benchmark For Foundation Model
Adaptation in Medical Image Classification
|
Preprint. Under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Foundation models, often pre-trained with large-scale data, have achieved
paramount success in jump-starting various vision and language applications.
Recent advances further enable adapting foundation models in downstream tasks
efficiently using only a few training samples, e.g., in-context learning. Yet,
the application of such learning paradigms in medical image analysis remains
scarce due to the shortage of publicly accessible data and benchmarks. In this
paper, we investigate approaches for adapting foundation models for medical image
classification and present a novel dataset and benchmark for the evaluation,
i.e., examining the overall performance of accommodating the large-scale
foundation models downstream on a set of diverse real-world clinical tasks. We
collect five sets of medical imaging data from multiple institutes targeting a
variety of real-world clinical tasks (22,349 images in total), i.e., thoracic
diseases screening in X-rays, pathological lesion tissue screening, lesion
detection in endoscopy images, neonatal jaundice evaluation, and diabetic
retinopathy grading. Results of multiple baseline methods are demonstrated
using the proposed dataset from both accuracy and cost-effective perspectives.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 01:46:07 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Wang",
"Dequan",
""
],
[
"Wang",
"Xiaosong",
""
],
[
"Wang",
"Lilong",
""
],
[
"Li",
"Mengzhang",
""
],
[
"Da",
"Qian",
""
],
[
"Liu",
"Xiaoqiang",
""
],
[
"Gao",
"Xiangyu",
""
],
[
"Shen",
"Jun",
""
],
[
"He",
"Junjun",
""
],
[
"Shen",
"Tian",
""
],
[
"Duan",
"Qi",
""
],
[
"Zhao",
"Jie",
""
],
[
"Li",
"Kang",
""
],
[
"Qiao",
"Yu",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
new_dataset
| 0.999747 |
2306.09581
|
Muhamad Taufan
|
Muhamad Taufan and I Made Wiryana
|
Pengembangan Domain Specific Language Untuk Pengelolaan Data Warehouse
|
16 pages, in Indonesian language, 8 figures
| null | null | null |
cs.DB cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Efforts to improve the performance of transaction services at a bank can be made
through data retention: reducing the volume of data in the production database by
moving historical data, in accordance with the bank's rules, to a data warehouse.
The design and implementation of a Domain Specific Language (DSL) application for
handling data transfer to the data warehouse is divided into lexical analysis,
syntax analysis, semantic analysis, and code generation, each of which has
different characteristics in producing an executable command. An application has
been developed with the DSL method, which helps reduce command-writing errors and
offers non-technical users a simple way to transfer data. The test results inform
the choice of Oracle transfer method according to the size of the data involved.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 01:55:35 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Taufan",
"Muhamad",
""
],
[
"Wiryana",
"I Made",
""
]
] |
new_dataset
| 0.998784 |
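The DSL record above splits the implementation into lexical analysis, syntax analysis, semantic analysis, and code generation. The sketch below walks a made-up archival command through those stages down to generated SQL; the grammar (`MOVE ... OLDER THAN ... TO ...`), the column name `created_at`, and the emitted statements are all invented for illustration and do not come from the paper.

```python
import re

TOKEN_RE = re.compile(r"(MOVE|OLDER\s+THAN|TO|\d+|\w+)", re.IGNORECASE)

def tokenize(command):
    """Lexical analysis: break the DSL command into a flat token list."""
    return [" ".join(tok.split()).upper() for tok in TOKEN_RE.findall(command)]

def parse(tokens):
    """Syntax/semantic analysis for the toy grammar:
       MOVE <table> OLDER THAN <days> TO <target_schema>"""
    if len(tokens) != 6 or tokens[0] != "MOVE" or tokens[2] != "OLDER THAN" or tokens[4] != "TO":
        raise SyntaxError(f"unexpected command structure: {tokens}")
    return {"table": tokens[1].lower(), "days": int(tokens[3]), "target": tokens[5].lower()}

def generate_sql(ast):
    """Code generation: emit the archive-and-trim statements a non-technical user
    would otherwise have to write by hand (dialect details deliberately simplified)."""
    cond = f"created_at < CURRENT_DATE - INTERVAL '{ast['days']}' DAY"
    return [
        f"INSERT INTO {ast['target']}.{ast['table']} SELECT * FROM {ast['table']} WHERE {cond};",
        f"DELETE FROM {ast['table']} WHERE {cond};",
    ]

for stmt in generate_sql(parse(tokenize("MOVE transactions OLDER THAN 90 TO dwh"))):
    print(stmt)
```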
2306.09590
|
Dongming Wu
|
Dongming Wu, Fan Jia, Jiahao Chang, Zhuoling Li, Jianjian Sun, Chunrui
Han, Shuailin Li, Yingfei Liu, Zheng Ge, Tiancai Wang
|
The 1st-place Solution for CVPR 2023 OpenLane Topology in Autonomous
Driving Challenge
|
Accepted by CVPR2023 Workshop
(https://opendrivelab.com/AD23Challenge.html#openlane_topology)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present the 1st-place solution of OpenLane Topology in Autonomous Driving
Challenge. Considering that topology reasoning is based on centerline detection
and traffic element detection, we develop a multi-stage framework for high
performance. Specifically, the centerline is detected by the powerful PETRv2
detector and the popular YOLOv8 is employed to detect the traffic elements.
Further, we design a simple yet effective MLP-based head for topology
prediction. Our method achieves 55\% OLS on the OpenLaneV2 test set, surpassing
the 2nd solution by 8 points.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 02:33:12 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Wu",
"Dongming",
""
],
[
"Jia",
"Fan",
""
],
[
"Chang",
"Jiahao",
""
],
[
"Li",
"Zhuoling",
""
],
[
"Sun",
"Jianjian",
""
],
[
"Han",
"Chunrui",
""
],
[
"Li",
"Shuailin",
""
],
[
"Liu",
"Yingfei",
""
],
[
"Ge",
"Zheng",
""
],
[
"Wang",
"Tiancai",
""
]
] |
new_dataset
| 0.970874 |
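The OpenLane solution above mentions a "simple yet effective MLP-based head for topology prediction" on top of the centerline and traffic-element detectors. The snippet below sketches what such a pairwise-scoring head can look like in PyTorch; the feature dimensions, the sigmoid output, and the class name are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TopologyHead(nn.Module):
    """Minimal MLP head that scores pairwise relations between detected centerlines
    and traffic elements from their query embeddings (an illustrative sketch)."""
    def __init__(self, dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, lane_emb, elem_emb):
        # lane_emb: (L, D) centerline queries, elem_emb: (E, D) traffic-element queries
        L, E = lane_emb.size(0), elem_emb.size(0)
        pairs = torch.cat(
            [lane_emb.unsqueeze(1).expand(L, E, -1),
             elem_emb.unsqueeze(0).expand(L, E, -1)], dim=-1)
        return torch.sigmoid(self.mlp(pairs)).squeeze(-1)   # (L, E) relation scores

head = TopologyHead()
scores = head(torch.randn(12, 256), torch.randn(7, 256))
print(scores.shape)   # torch.Size([12, 7])
```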
2306.09592
|
Yang Li
|
Rui Zhang, Ziqi Wang, Yang Li, Jiabao Wang, Zhiteng Wang
|
FewSAR: A Few-shot SAR Image Classification Benchmark
|
7 pages, 4 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Few-shot learning (FSL) is one of the significant and hard problems in the
field of image classification. However, in contrast to the rapid development of
visible-light datasets, progress in SAR target image classification has been
much slower. The lack of a unified benchmark is a key reason for this phenomenon,
which may be severely overlooked by the current literature. Researchers in
SAR target image classification always report their new results on their own
datasets and experimental setups. This leads to inefficiency in result comparison
and impedes the further progress of this area. Motivated by this observation,
we propose a novel few-shot SAR image classification benchmark (FewSAR) to
address this issue. FewSAR consists of an open-source Python code library of 15
classic methods in three categories for few-shot SAR image classification. It
provides an accessible and customizable testbed for different few-shot SAR
image classification task. To further understanding the performance of
different few-shot methods, we establish evaluation protocols and conduct
extensive experiments within the benchmark. By analyzing the quantitative
results and runtime under the same setting, we observe that the accuracy of
metric learning methods can achieve the best results. Meta-learning methods and
fine-tuning methods perform poorly on few-shot SAR images, which is primarily
due to the bias of existing datasets. We believe that FewSAR will open up a new
avenue for future research and development, on real-world challenges at the
intersection of SAR image classification and few-shot deep learning. We will
provide our code for the proposed FewSAR at https://github.com/solarlee/FewSAR.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 02:35:00 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Zhang",
"Rui",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Li",
"Yang",
""
],
[
"Wang",
"Jiabao",
""
],
[
"Wang",
"Zhiteng",
""
]
] |
new_dataset
| 0.999808 |
2306.09593
|
Guangtao Lyu
|
Guangtao Lyu, Kun Liu, Anna Zhu, Seiichi Uchida, Brian Kenji Iwana
|
FETNet: Feature Erasing and Transferring Network for Scene Text Removal
|
Accepted by Pattern Recognition 2023
|
Pattern Recognition 2023
|
10.1016/j.patcog.2023.109531
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The scene text removal (STR) task aims to remove text regions and recover the
background smoothly in images for private information protection. Most existing
STR methods adopt encoder-decoder-based CNNs, with direct copies of the
features in the skip connections. However, the encoded features contain both
text texture and structure information. The insufficient utilization of text
features hampers the performance of background reconstruction in text removal
regions. To tackle these problems, we propose a novel Feature Erasing and
Transferring (FET) mechanism to reconfigure the encoded features for STR in
this paper. In FET, a Feature Erasing Module (FEM) is designed to erase text
features. An attention module is responsible for generating the feature
similarity guidance. The Feature Transferring Module (FTM) is introduced to
transfer the corresponding features in different layers based on the attention
guidance. With this mechanism, a one-stage, end-to-end trainable network called
FETNet is constructed for scene text removal. In addition, to facilitate
research on both scene text removal and segmentation tasks, we introduce a
novel dataset, Flickr-ST, with multi-category annotations. Extensive
experiments and ablation studies are conducted on the public datasets and
Flickr-ST. Our proposed method achieves state-of-the-art performance on most
metrics, with remarkably higher-quality scene text removal results. The source
code of our work is available at
\href{https://github.com/GuangtaoLyu/FETNet}{https://github.com/GuangtaoLyu/FETNet}.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 02:38:30 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Lyu",
"Guangtao",
""
],
[
"Liu",
"Kun",
""
],
[
"Zhu",
"Anna",
""
],
[
"Uchida",
"Seiichi",
""
],
[
"Iwana",
"Brian Kenji",
""
]
] |
new_dataset
| 0.987866 |
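FETNet's Feature Erasing Module, as summarized above, suppresses text responses in the encoded features before they reach the decoder. The toy module below illustrates attention-guided erasing in PyTorch with a 1x1 convolution producing a soft mask; it is an interpretation of the idea, not the paper's FEM, and the channel count is arbitrary.

```python
import torch
import torch.nn as nn

class FeatureErasing(nn.Module):
    """Toy attention-guided feature-erasing block: predict a soft text mask from
    the encoded features and suppress the corresponding responses before they
    reach the decoder skip connection (a sketch inspired by the FEM idea)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feats):
        text_mask = self.attn(feats)          # (N, 1, H, W) soft text-region attention
        erased = feats * (1.0 - text_mask)    # erase text responses, keep background
        return erased, text_mask

block = FeatureErasing(64)
erased, mask = block(torch.randn(2, 64, 32, 32))
print(erased.shape, mask.shape)
```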
2306.09613
|
Pha Nguyen
|
Pha Nguyen, Kha Gia Quach, John Gauch, Samee U. Khan, Bhiksha Raj,
Khoa Luu
|
UTOPIA: Unconstrained Tracking Objects without Preliminary Examination
via Cross-Domain Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple Object Tracking (MOT) aims to find bounding boxes and identities of
targeted objects in consecutive video frames. While fully-supervised MOT
methods have achieved high accuracy on existing datasets, they cannot
generalize well on a newly obtained dataset or a new unseen domain. In this
work, we first address the MOT problem from the cross-domain point of view,
imitating the process of new data acquisition in practice. Then, a new
cross-domain MOT adaptation from existing datasets is proposed without any
pre-defined human knowledge in understanding and modeling objects. It can also
learn and update itself from the target data feedback. Intensive
experiments are conducted on four challenging settings, including MOTSynth to
MOT17, MOT17 to MOT20, MOT17 to VisDrone, and MOT17 to DanceTrack. We then
prove the adaptability of the proposed self-supervised learning strategy. The
experiments also show superior performance on tracking metrics MOTA and IDF1,
compared to fully supervised, unsupervised, and self-supervised
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 04:06:15 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Nguyen",
"Pha",
""
],
[
"Quach",
"Kha Gia",
""
],
[
"Gauch",
"John",
""
],
[
"Khan",
"Samee U.",
""
],
[
"Raj",
"Bhiksha",
""
],
[
"Luu",
"Khoa",
""
]
] |
new_dataset
| 0.996167 |
2306.09615
|
Yaqi Zhang
|
Yaqi Zhang, Yan Lu, Bin Liu, Zhiwei Zhao, Qi Chu, Nenghai Yu
|
EVOPOSE: A Recursive Transformer For 3D Human Pose Estimation With
Kinematic Structure Priors
|
5 pages, 2 figures, 4 tables, published in the proceedings of IEEE
ICASSP 2023
| null |
10.1109/ICASSP49357.2023.10095302
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers are popular in recent 3D human pose estimation, where they utilize
long-term modeling to lift 2D keypoints into the 3D space. However, current
transformer-based methods do not fully exploit the prior knowledge of the human
skeleton provided by the kinematic structure. In this paper, we propose a novel
transformer-based model EvoPose to introduce the human body prior knowledge for
3D human pose estimation effectively. Specifically, a Structural Priors
Representation (SPR) module represents human priors as structural features
carrying rich body patterns, e.g. joint relationships. The structural features
are interacted with 2D pose sequences and help the model to achieve more
informative spatiotemporal features. Moreover, a Recursive Refinement (RR)
module is applied to refine the 3D pose outputs by utilizing estimated results
and further injects human priors simultaneously. Extensive experiments
demonstrate the effectiveness of EvoPose, which achieves a new state of the art
on the two most popular benchmarks, Human3.6M and MPI-INF-3DHP.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 04:09:16 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Zhang",
"Yaqi",
""
],
[
"Lu",
"Yan",
""
],
[
"Liu",
"Bin",
""
],
[
"Zhao",
"Zhiwei",
""
],
[
"Chu",
"Qi",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.99703 |
2306.09626
|
Kian Ming Lim
|
Jia Le Ngwe, Kian Ming Lim, Chin Poo Lee, and Thian Song Ong
|
PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging
Facial Expression Recognition
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial Expression Recognition (FER) is a machine learning problem that deals
with recognizing human facial expressions. While existing work has achieved
performance improvements in recent years, FER in the wild and under challenging
conditions remains a challenge. In this paper, a lightweight patch and
attention network based on MobileNetV1, referred to as PAtt-Lite, is proposed
to improve FER performance under challenging conditions. A truncated
ImageNet-pre-trained MobileNetV1 is utilized as the backbone feature extractor
of the proposed method. In place of the truncated layers is a patch extraction
block that is proposed for extracting significant local facial features to
enhance the representation from MobileNetV1, especially under challenging
conditions. An attention classifier is also proposed to improve the learning of
these patched feature maps from the extremely lightweight feature extractor.
The experimental results on public benchmark databases proved the effectiveness
of the proposed method. PAtt-Lite achieved state-of-the-art results on CK+,
RAF-DB, FER2013, FERPlus, and the challenging conditions subsets for RAF-DB and
FERPlus. The source code for the proposed method will be available at
https://github.com/JLREx/PAtt-Lite.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 04:51:18 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Ngwe",
"Jia Le",
""
],
[
"Lim",
"Kian Ming",
""
],
[
"Lee",
"Chin Poo",
""
],
[
"Ong",
"Thian Song",
""
]
] |
new_dataset
| 0.99977 |
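PAtt-Lite, per the abstract above, adds a patch-extraction block and an attention classifier on top of a truncated MobileNetV1. The PyTorch sketch below conveys only the general shape of such a head (a strided convolution for patches, self-attention over patch tokens, mean-pooled classification); the layer sizes, head count, and 7-class output are assumptions, and the backbone is omitted.

```python
import torch
import torch.nn as nn

class PatchAttentionClassifier(nn.Module):
    """Toy 'patch extraction + attention classifier' head placed on top of a
    truncated backbone feature map (a sketch of the idea, not PAtt-Lite itself)."""
    def __init__(self, in_ch=256, dim=128, num_classes=7):
        super().__init__()
        self.patch = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)   # local patch features
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, feats):                              # feats: (N, in_ch, H, W)
        p = self.patch(feats).flatten(2).transpose(1, 2)   # (N, tokens, dim)
        a, _ = self.attn(p, p, p)                          # self-attention over patches
        return self.fc(a.mean(dim=1))                      # (N, num_classes)

head = PatchAttentionClassifier()
print(head(torch.randn(2, 256, 16, 16)).shape)             # torch.Size([2, 7])
```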
2306.09764
|
Vincent Berenz
|
Vincent Berenz, Felix Widmaier, Simon Guist, Bernhard Sch\"olkopf and
Dieter B\"uchler
|
Synchronizing Machine Learning Algorithms, Realtime Robotic Control and
Simulated Environment with o80
|
work presented at the Robot Software Architectures Workshop - RSA
2023, ICRA
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic applications require the integration of various modalities,
encompassing perception, control of real robots and possibly the control of
simulated environments. While the state-of-the-art robotic software solutions
such as ROS 2 provide most of the required features, flexible synchronization
between algorithms, data streams and control loops can be tedious. o80 is a
versatile C++ framework for robotics which provides a shared memory model and a
command framework for real-time critical systems. It enables expert users to
set up complex robotic systems and generate Python bindings for scientists.
o80's unique feature is its flexible synchronization between processes,
including the traditional blocking commands and the novel ``bursting mode'',
which allows user code to control the execution of the lower process control
loop. This makes it particularly useful for setups that mix real and simulated
environments.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 10:50:21 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Berenz",
"Vincent",
""
],
[
"Widmaier",
"Felix",
""
],
[
"Guist",
"Simon",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Büchler",
"Dieter",
""
]
] |
new_dataset
| 0.996732 |
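The o80 abstract highlights a "bursting mode" in which user code paces the lower-level control loop. The toy Python class below illustrates only that synchronization pattern; it does not use o80's actual C++/Python API, and every name in it is invented for illustration.

```python
import itertools

class ToyBurstingLoop:
    """Toy 'bursting mode': the backend loop advances only when the caller
    explicitly requests N iterations (e.g. in lockstep with a simulator)."""
    def __init__(self):
        self.state = 0.0
        self.pending = []              # queued desired-state commands

    def add_command(self, target):
        self.pending.append(target)

    def burst(self, iterations):
        # Run the low-level loop exactly `iterations` steps, then hand control back.
        for _ in range(iterations):
            if self.pending:
                target = self.pending[0]
                self.state += 0.1 * (target - self.state)   # toy first-order tracking
                if abs(target - self.state) < 1e-2:
                    self.pending.pop(0)
        return self.state

loop = ToyBurstingLoop()
loop.add_command(1.0)
for step in itertools.count():
    value = loop.burst(10)             # user code decides when the backend advances
    if not loop.pending:
        print(f"converged after {(step + 1) * 10} iterations, state={value:.3f}")
        break
```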
2306.09783
|
Amos Brocco
|
Massimo Coluzzi, Amos Brocco, Alessandro Antonucci, Tiziano Leidi
|
MementoHash: A Stateful, Minimal Memory, Best Performing Consistent Hash
Algorithm
| null | null | null | null |
cs.DC cs.DS cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Consistent hashing is used in distributed systems and networking applications
to spread data evenly and efficiently across a cluster of nodes. In this paper,
we present MementoHash, a novel consistent hashing algorithm that eliminates
known limitations of state-of-the-art algorithms while keeping optimal
performance and minimal memory usage. We describe the algorithm in detail,
provide a pseudo-code implementation, and formally establish its solid
theoretical guarantees. To measure the efficacy of MementoHash, we compare its
performance, in terms of memory usage and lookup time, to that of
state-of-the-art algorithms, namely, AnchorHash, DxHash, and JumpHash. Unlike
JumpHash, MementoHash can handle random failures. Moreover, MementoHash does
not require fixing the overall capacity of the cluster (as AnchorHash and
DxHash do), allowing it to scale indefinitely. The number of removed nodes
affects the performance of all the considered algorithms. Therefore, we conduct
experiments considering three different scenarios: stable (no removed nodes),
one-shot removals (90% of the nodes removed at once), and incremental removals.
We report experimental results averaged over cluster sizes ranging from ten to
one million nodes. Results indicate that our algorithm shows optimal lookup
performance and minimal memory usage in its best-case scenario. It behaves
better than AnchorHash and DxHash in its average-case scenario and at least as
well as those two algorithms in its worst-case scenario. However, the
worst-case scenario for MementoHash occurs when more than 70% of the nodes
fail, which is an unlikely scenario. Therefore, MementoHash shows the
best performance during the regular life cycle of a cluster.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 11:41:34 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Coluzzi",
"Massimo",
""
],
[
"Brocco",
"Amos",
""
],
[
"Antonucci",
"Alessandro",
""
],
[
"Leidi",
"Tiziano",
""
]
] |
new_dataset
| 0.996784 |
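Of the baselines the MementoHash abstract names, JumpHash is compact enough to quote from memory; the sketch below is the standard Lamping-Veach jump consistent hash, shown only to illustrate the minimal-memory consistent hashing being benchmarked. MementoHash itself is not reproduced here.

```python
def jump_consistent_hash(key: int, num_buckets: int) -> int:
    """JumpHash (Lamping & Veach): maps a 64-bit key to one of num_buckets buckets
    using O(1) memory, moving only ~1/(n+1) of keys when a node is added."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF
        j = int(float(b + 1) * (float(1 << 31) / float((key >> 33) + 1)))
    return b

keys = range(10_000)
before = [jump_consistent_hash(k, 10) for k in keys]
after = [jump_consistent_hash(k, 11) for k in keys]
moved = sum(b != a for b, a in zip(before, after))
print(f"{moved / len(before):.1%} of keys moved when growing 10 -> 11 nodes")  # ~1/11
```

Growing the cluster moves only a small fraction of keys, but JumpHash can only remove the highest-numbered node; handling arbitrary node failures without fixing the overall capacity is precisely the gap the abstract says MementoHash targets.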
2306.09815
|
Qingsong Xu
|
Qingsong Xu, Yilei Shi, Xiao Xiang Zhu
|
DisasterNets: Embedding Machine Learning in Disaster Mapping
|
4 pages, IEEE IGARSS 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Disaster mapping is a critical task that often requires on-site experts and
is time-consuming. To address this, a comprehensive framework is presented for
fast and accurate recognition of disasters using machine learning, termed
DisasterNets. It consists of two stages, space granulation and attribute
granulation. The space granulation stage leverages supervised/semi-supervised
learning, unsupervised change detection, and domain adaptation with/without
source data techniques to handle different disaster mapping scenarios.
Furthermore, the disaster database with the corresponding geographic
information field properties is built by using the attribute granulation stage.
The framework is applied to earthquake-triggered landslide mapping and
large-scale flood mapping. The results demonstrate a competitive performance
for high-precision, high-efficiency, and cross-scene recognition of disasters.
To bridge the gap between disaster mapping and machine learning communities, we
will provide an openly accessible tool based on DisasterNets. The framework and
tool will be available at https://github.com/HydroPML/DisasterNets.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 12:50:46 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Xu",
"Qingsong",
""
],
[
"Shi",
"Yilei",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.981623 |
2306.09864
|
Hao Zhu
|
Yifei Zeng, Yuanxun Lu, Xinya Ji, Yao Yao, Hao Zhu, Xun Cao
|
AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation
|
Project website at https://zeng-yifei.github.io/avatarbooth_page/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce AvatarBooth, a novel method for generating high-quality 3D
avatars using text prompts or specific images. Unlike previous approaches that
can only synthesize avatars based on simple text descriptions, our method
enables the creation of personalized avatars from casually captured face or
body images, while still supporting text-based model generation and editing.
Our key contribution is the precise avatar generation control by using dual
fine-tuned diffusion models separately for the human face and body. This
enables us to capture intricate details of facial appearance, clothing, and
accessories, resulting in highly realistic avatar generations. Furthermore, we
introduce a pose-consistent constraint to the optimization process to enhance the
multi-view consistency of synthesized head images from the diffusion model and
thus eliminate interference from uncontrolled human poses. In addition, we
present a multi-resolution rendering strategy that facilitates coarse-to-fine
supervision of 3D avatar generation, thereby enhancing the performance of the
proposed system. The resulting avatar model can be further edited using
additional text descriptions and driven by motion sequences. Experiments show
that AvatarBooth outperforms previous text-to-3D methods in terms of rendering
and geometric quality from either text prompts or specific images. Please check
our project website at https://zeng-yifei.github.io/avatarbooth_page/.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 14:18:51 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Zeng",
"Yifei",
""
],
[
"Lu",
"Yuanxun",
""
],
[
"Ji",
"Xinya",
""
],
[
"Yao",
"Yao",
""
],
[
"Zhu",
"Hao",
""
],
[
"Cao",
"Xun",
""
]
] |
new_dataset
| 0.99066 |
2306.09884
|
Clément Bonnet
|
Clément Bonnet, Daniel Luo, Donal Byrne, Shikha Surana, Vincent
Coyette, Paul Duckworth, Laurence I. Midgley, Tristan Kalloniatis, Sasha
Abramowitz, Cemlyn N. Waters, Andries P. Smit, Nathan Grinsztajn, Ulrich A.
Mbou Sob, Omayma Mahjoub, Elshadai Tegegn, Mohamed A. Mimouni, Raphael Boige,
Ruan de Kock, Daniel Furelos-Blanco, Victor Le, Arnu Pretorius, Alexandre
Laterre
|
Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments
in JAX
|
9 pages + 16 pages of appendices and references. Submitted to NeurIPS
2023 Datasets and Benchmarks Track
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-source reinforcement learning (RL) environments have played a crucial
role in driving progress in the development of AI algorithms. In modern RL
research, there is a need for simulated environments that are performant,
scalable, and modular to enable their utilization in a wider range of potential
real-world applications. Therefore, we present Jumanji, a suite of diverse RL
environments specifically designed to be fast, flexible, and scalable. Jumanji
provides a suite of environments focusing on combinatorial problems frequently
encountered in industry, as well as challenging general decision-making tasks.
By leveraging the efficiency of JAX and hardware accelerators like GPUs and
TPUs, Jumanji enables rapid iteration of research ideas and large-scale
experimentation, ultimately empowering more capable agents. Unlike existing RL
environment suites, Jumanji is highly customizable, allowing users to tailor
the initial state distribution and problem complexity to their needs.
Furthermore, we provide actor-critic baselines for each environment,
accompanied by preliminary findings on scaling and generalization scenarios.
Jumanji aims to set a new standard for speed, adaptability, and scalability of
RL environments.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 14:52:24 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Bonnet",
"Clément",
""
],
[
"Luo",
"Daniel",
""
],
[
"Byrne",
"Donal",
""
],
[
"Surana",
"Shikha",
""
],
[
"Coyette",
"Vincent",
""
],
[
"Duckworth",
"Paul",
""
],
[
"Midgley",
"Laurence I.",
""
],
[
"Kalloniatis",
"Tristan",
""
],
[
"Abramowitz",
"Sasha",
""
],
[
"Waters",
"Cemlyn N.",
""
],
[
"Smit",
"Andries P.",
""
],
[
"Grinsztajn",
"Nathan",
""
],
[
"Sob",
"Ulrich A. Mbou",
""
],
[
"Mahjoub",
"Omayma",
""
],
[
"Tegegn",
"Elshadai",
""
],
[
"Mimouni",
"Mohamed A.",
""
],
[
"Boige",
"Raphael",
""
],
[
"de Kock",
"Ruan",
""
],
[
"Furelos-Blanco",
"Daniel",
""
],
[
"Le",
"Victor",
""
],
[
"Pretorius",
"Arnu",
""
],
[
"Laterre",
"Alexandre",
""
]
] |
new_dataset
| 0.998449 |
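Jumanji's speed, as described above, comes from writing environments as pure JAX functions that can be jit-compiled and vmapped across many instances. The toy environment below illustrates that functional pattern in general; it is not Jumanji's actual API nor one of its registered environments.

```python
import jax
import jax.numpy as jnp

# Toy environment in the functional style used by JAX-based suites:
# reset/step are pure functions of (state, action), so they jit and vmap cleanly.
def reset(key):
    return jax.random.uniform(key, (2,))            # state: a 2-D position

def step(state, action):
    new_state = state + 0.1 * action
    reward = -jnp.linalg.norm(new_state)            # reward for moving toward the origin
    return new_state, reward

batched_reset = jax.jit(jax.vmap(reset))
batched_step = jax.jit(jax.vmap(step))

keys = jax.random.split(jax.random.PRNGKey(0), 1024)
states = batched_reset(keys)                        # 1024 environments in one call
actions = -states                                   # trivial "policy"
states, rewards = batched_step(states, actions)
print(states.shape, rewards.shape)                  # (1024, 2) (1024,)
```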
2306.09911
|
Diego Kozlowski
|
Diego Kozlowski, Jens Peter Andersen and Vincent Larivière
|
Uncited articles and their effect on the concentration of citations
|
17 pages, 8 figures
| null | null | null |
cs.DL cs.CY physics.soc-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Empirical evidence demonstrates that citations received by scholarly
publications follow a pattern of preferential attachment, resulting in a
power-law distribution. Such asymmetry has sparked significant debate regarding
the use of citations for research evaluation. However, a consensus has yet to
be established concerning the historical trends in citation concentration. Are
citations becoming more concentrated in a small number of articles? Or have
recent geopolitical and technical changes in science led to more decentralized
distributions? This ongoing debate stems from a lack of technical clarity in
measuring inequality. Given the variations in citation practices across
disciplines and over time, it is crucial to account for multiple factors that
can influence the findings. This article explores how reference-based and
citation-based approaches, uncited articles, citation inflation, the expansion
of bibliometric databases, disciplinary differences, and self-citations affect
the evolution of citation concentration. Our results indicate a decreasing
trend in citation concentration, primarily driven by a decline in uncited
articles, which, in turn, can be attributed to the growing significance of Asia
and Europe. On the whole, our findings clarify current debates on citation
concentration and show that, contrary to a widely-held belief, citations are
increasingly scattered.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 15:38:12 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Kozlowski",
"Diego",
""
],
[
"Andersen",
"Jens Peter",
""
],
[
"Larivière",
"Vincent",
""
]
] |
new_dataset
| 0.982503 |
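A common way to quantify the citation concentration discussed above is the Gini coefficient. The snippet below shows, on synthetic heavy-tailed data, how including or excluding uncited articles shifts the measured concentration; the distributions are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = evenly spread, 1 = fully concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(0)
cited = rng.pareto(1.5, size=9_000) * 5                     # heavy-tailed citations, cited papers only
with_uncited = np.concatenate([cited, np.zeros(1_000)])     # add 10% uncited articles
print(f"Gini, cited only:    {gini(cited):.3f}")
print(f"Gini, incl. uncited: {gini(with_uncited):.3f}")     # uncited papers raise measured concentration
```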
2306.09930
|
Hongwei Jin
|
George Papadimitriou, Hongwei Jin, Cong Wang, Krishnan Raghavan,
Anirban Mandal, Prasanna Balaprakash, Ewa Deelman
|
Flow-Bench: A Dataset for Computational Workflow Anomaly Detection
|
Work under review
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A computational workflow, also known as workflow, consists of tasks that must
be executed in a specific order to attain a specific goal. Often, in fields
such as biology, chemistry, physics, and data science, among others, these
workflows are complex and are executed in large-scale, distributed, and
heterogeneous computing environments that are prone to failures and performance
degradations. Therefore, anomaly detection for workflows is an important
paradigm that aims to identify unexpected behavior or errors in workflow
execution. This crucial task for improving the reliability of workflow executions
calls for machine learning-based techniques. However, the application of such
techniques is limited, in large part, due to the lack of open datasets and
benchmarking. To address this gap, we make the following contributions in this
paper: (1) we systematically inject anomalies and collect raw execution logs
from workflows executing on distributed infrastructures; (2) we summarize the
statistics of new datasets, as well as a set of open datasets, and provide
insightful analyses; (3) we benchmark unsupervised anomaly detection techniques
by converting workflows into both tabular and graph-structured data. Our
findings allow us to examine the effectiveness and efficiencies of the
benchmark methods and identify potential research opportunities for improvement
and generalization. The dataset and benchmark code are available online with
MIT License for public usage.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 15:59:23 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Papadimitriou",
"George",
""
],
[
"Jin",
"Hongwei",
""
],
[
"Wang",
"Cong",
""
],
[
"Raghavan",
"Krishnan",
""
],
[
"Mandal",
"Anirban",
""
],
[
"Balaprakash",
"Prasanna",
""
],
[
"Deelman",
"Ewa",
""
]
] |
new_dataset
| 0.999837 |
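As a concrete example of the tabular benchmarking route the Flow-Bench abstract mentions, the sketch below runs an off-the-shelf unsupervised detector on synthetic per-task features; the feature names and distributions are invented stand-ins, not the actual Flow-Bench schema.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for per-task workflow features (runtime, bytes read, CPU time).
rng = np.random.default_rng(42)
normal = rng.normal(loc=[60, 1e3, 50], scale=[5, 100, 5], size=(500, 3))
anomalous = rng.normal(loc=[200, 1e3, 150], scale=[20, 100, 20], size=(20, 3))
X = np.vstack([normal, anomalous])          # last 20 rows are injected anomalies

detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
scores = detector.decision_function(X)      # lower score = more anomalous
flagged = np.argsort(scores)[:20]
print(f"{np.mean(flagged >= 500):.0%} of the 20 most anomalous rows are injected anomalies")
```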
2306.09940
|
Jeovane Honório Alves
|
Paulo R. Lisboa de Almeida, Jeovane Honório Alves, Luiz S. Oliveira,
Andre Gustavo Hochuli, João V. Fröhlich, Rodrigo A. Krauel
|
Vehicle Occurrence-based Parking Space Detection
|
Accepted for presentation at the 2023 IEEE International Conference
on Systems, Man, and Cybernetics (SMC 2023)
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart-parking solutions use sensors, cameras, and data analysis to improve
parking efficiency and reduce traffic congestion. Computer vision-based methods
have been used extensively in recent years to tackle the problem of parking lot
management, but most of the works assume that the parking spots are manually
labeled, impacting the cost and feasibility of deployment. To fill this gap,
this work presents an automatic parking space detection method, which receives
a sequence of images of a parking lot and returns a list of coordinates
identifying the detected parking spaces. The proposed method employs instance
segmentation to identify cars and, using vehicle occurrence, generate a heat
map of parking spaces. The results using twelve different subsets from the
PKLot and CNRPark-EXT parking lot datasets show that the method achieved an
AP25 score up to 95.60\% and AP50 score up to 79.90\%.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 16:22:45 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"de Almeida",
"Paulo R. Lisboa",
""
],
[
"Alves",
"Jeovane Honório",
""
],
[
"Oliveira",
"Luiz S.",
""
],
[
"Hochuli",
"Andre Gustavo",
""
],
[
"Fröhlich",
"João V.",
""
],
[
"Krauel",
"Rodrigo A.",
""
]
] |
new_dataset
| 0.99359 |
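The parking-space method above accumulates vehicle detections into a heat map of likely spaces. The short sketch below shows that accumulation-and-threshold idea on toy bounding boxes, leaving out the instance-segmentation and spot-extraction steps of the actual pipeline.

```python
import numpy as np

def occupancy_heatmap(detections, frame_shape=(1080, 1920)):
    """Accumulate vehicle bounding boxes over a sequence of frames into a heat map;
    thresholding it gives candidate parking-space regions (a crude stand-in for
    the pipeline described in the abstract)."""
    heat = np.zeros(frame_shape, dtype=np.float32)
    for frame in detections:                      # each frame: list of (x1, y1, x2, y2) boxes
        for x1, y1, x2, y2 in frame:
            heat[y1:y2, x1:x2] += 1.0
    return heat / max(len(detections), 1)

frames = [[(100, 200, 220, 300)], [(105, 205, 225, 305)], []]   # toy detections
heat = occupancy_heatmap(frames)
candidate_mask = heat > 0.5                       # pixels occupied in most frames
print(candidate_mask.sum(), "pixels flagged as likely parking-space area")
```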
2306.09944
|
Jiajun Wu
|
Samuel Clarke, Ruohan Gao, Mason Wang, Mark Rau, Julia Xu, Jui-Hsien
Wang, Doug L. James, Jiajun Wu
|
RealImpact: A Dataset of Impact Sound Fields for Real Objects
|
CVPR 2023 (Highlight). Project page:
https://samuelpclarke.com/realimpact/
| null | null | null |
cs.SD cs.CV cs.GR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objects make unique sounds under different perturbations, environmental
conditions, and poses relative to the listener. While prior works have modeled
impact sounds and sound propagation in simulation, we lack a standard dataset
of impact sound fields of real objects for audio-visual learning and
calibration of the sim-to-real gap. We present RealImpact, a large-scale
dataset of real object impact sounds recorded under controlled conditions.
RealImpact contains 150,000 recordings of impact sounds of 50 everyday objects
with detailed annotations, including their impact locations, microphone
locations, contact force profiles, material labels, and RGBD images. We make
preliminary attempts to use our dataset as a reference to current simulation
methods for estimating object impact sounds that match the real world.
Moreover, we demonstrate the usefulness of our dataset as a testbed for
acoustic and audio-visual learning via the evaluation of two benchmark tasks,
including listener location classification and visual acoustic matching.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 16:25:41 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Clarke",
"Samuel",
""
],
[
"Gao",
"Ruohan",
""
],
[
"Wang",
"Mason",
""
],
[
"Rau",
"Mark",
""
],
[
"Xu",
"Julia",
""
],
[
"Wang",
"Jui-Hsien",
""
],
[
"James",
"Doug L.",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.999856 |