id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
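The rows below follow the schema in the header above (one arXiv metadata record per row, with a classifier prediction and its probability in the last two columns). A minimal sketch of how records with this schema might be loaded and filtered follows; it assumes the records are exported as a JSON Lines file, and the file name `arxiv_subset.jsonl` and the use of pandas are illustrative assumptions rather than part of this dataset's documentation.

```python
# Minimal sketch: load arXiv metadata rows with the schema above and keep
# high-confidence "new_dataset" predictions. The file name "arxiv_subset.jsonl"
# is a hypothetical placeholder.
import pandas as pd

# One JSON object per line, with keys matching the columns above
# (id, submitter, authors, title, ..., prediction, probability).
df = pd.read_json("arxiv_subset.jsonl", lines=True)

# Keep rows the classifier tagged as introducing a new dataset with
# probability >= 0.95 (the minimum value appearing in this column).
new_datasets = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.95)]

# "categories" is a space-separated string of arXiv categories; split it
# into a list for easier filtering.
new_datasets = new_datasets.assign(category_list=new_datasets["categories"].str.split())

# Example: records that include a computer vision category.
cv_rows = new_datasets[new_datasets["category_list"].apply(lambda cats: "cs.CV" in cats)]
print(cv_rows[["id", "title", "probability"]].to_string(index=False))
```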
2306.09946
|
Shenli Yuan
|
Shenli Yuan, Shaoxiong Wang, Radhen Patel, Megha Tippur, Connor Yako,
Edward Adelson, Kenneth Salisbury
|
Tactile-Reactive Roller Grasper
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Manipulation of objects within a robot's hand is one of the most important
challenges in achieving robot dexterity. The "Roller Graspers" are a
family of non-anthropomorphic hands utilizing motorized, rolling fingertips to
achieve in-hand manipulation. These graspers manipulate grasped objects by
commanding the rollers to exert forces that propel the object in the desired
motion directions. In this paper, we explore the possibility of robot in-hand
manipulation through tactile-guided rolling. We do so by developing the
Tactile-Reactive Roller Grasper (TRRG), which incorporates camera-based tactile
sensing with compliant, steerable cylindrical fingertips, with accompanying
sensor information processing and control strategies. We demonstrated that the
combination of tactile feedback and the actively rolling surfaces enables a
variety of robust in-hand manipulation applications. In addition, we also
demonstrated object reconstruction techniques using tactile-guided rolling. A
controlled experiment was conducted to provide insights on the benefits of
tactile-reactive rollers for manipulation. We considered two manipulation
cases: when the fingers are manipulating purely through rolling and when they
are periodically breaking and reestablishing contact as in regrasping. We found
that tactile-guided rolling can improve the manipulation robustness by allowing
the grasper to perform necessary fine grip adjustments in both manipulation
cases, indicating that hybrid rolling fingertip and finger-gaiting designs may
be a promising research direction.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 16:26:45 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Yuan",
"Shenli",
""
],
[
"Wang",
"Shaoxiong",
""
],
[
"Patel",
"Radhen",
""
],
[
"Tippur",
"Megha",
""
],
[
"Yako",
"Connor",
""
],
[
"Adelson",
"Edward",
""
],
[
"Salisbury",
"Kenneth",
""
]
] |
new_dataset
| 0.997351 |
2306.10012
|
Kai Zhang
|
Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, Yu Su
|
MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image
Editing
|
Website: https://osu-nlp-group.github.io/MagicBrush/
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-guided image editing is widely needed in daily life, ranging from
personal use to professional applications such as Photoshop. However, existing
methods are either zero-shot or trained on an automatically synthesized
dataset, which contains a high volume of noise. Thus, they still require lots
of manual tuning to produce desirable outcomes in practice. To address this
issue, we introduce MagicBrush (https://osu-nlp-group.github.io/MagicBrush/),
the first large-scale, manually annotated dataset for instruction-guided real
image editing that covers diverse scenarios: single-turn, multi-turn,
mask-provided, and mask-free editing. MagicBrush comprises over 10K manually
annotated triples (source image, instruction, target image), which supports
training large-scale text-guided image editing models. We fine-tune
InstructPix2Pix on MagicBrush and show that the new model can produce much
better images according to human evaluation. We further conduct extensive
experiments to evaluate current image editing baselines from multiple
dimensions including quantitative, qualitative, and human evaluations. The
results reveal the challenging nature of our dataset and the gap between
current baselines and real-world editing needs.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 17:58:58 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Zhang",
"Kai",
""
],
[
"Mo",
"Lingbo",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Sun",
"Huan",
""
],
[
"Su",
"Yu",
""
]
] |
new_dataset
| 0.9998 |
2306.10013
|
Yuqi Wang
|
Yuqi Wang, Yuntao Chen, Xingyu Liao, Lue Fan and Zhaoxiang Zhang
|
PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic
Segmentation
|
technical report
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Comprehensive modeling of the surrounding 3D world is key to the success of
autonomous driving. However, existing perception tasks like object detection,
road structure segmentation, depth & elevation estimation, and open-set object
localization each only focus on a small facet of the holistic 3D scene
understanding task. This divide-and-conquer strategy simplifies the algorithm
development procedure at the cost of losing an end-to-end unified solution to
the problem. In this work, we address this limitation by studying camera-based
3D panoptic segmentation, aiming to achieve a unified occupancy representation
for camera-only 3D scene understanding. To achieve this, we introduce a novel
method called PanoOcc, which utilizes voxel queries to aggregate spatiotemporal
information from multi-frame and multi-view images in a coarse-to-fine scheme,
integrating feature learning and scene representation into a unified occupancy
representation. We have conducted extensive ablation studies to verify the
effectiveness and efficiency of the proposed method. Our approach achieves new
state-of-the-art results for camera-based semantic segmentation and panoptic
segmentation on the nuScenes dataset. Furthermore, our method can be easily
extended to dense occupancy prediction and has shown promising performance on
the Occ3D benchmark. The code will be released at
https://github.com/Robertwyq/PanoOcc.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 17:59:33 GMT"
}
] | 2023-06-19T00:00:00 |
[
[
"Wang",
"Yuqi",
""
],
[
"Chen",
"Yuntao",
""
],
[
"Liao",
"Xingyu",
""
],
[
"Fan",
"Lue",
""
],
[
"Zhang",
"Zhaoxiang",
""
]
] |
new_dataset
| 0.964336 |
1904.11200
|
Shan Shen
|
Shan Shen, Tianxiang Shao, Xiaojing Shang, Yichen Guo, Ming Ling, Jun
Yang, Longxing Shi
|
TS Cache: A Fast Cache with Timing-speculation Mechanism Under Low
Supply Voltages
|
The final version in Transaction on VLSI
|
in IEEE Transactions on Very Large Scale Integration (VLSI)
Systems, vol. 28, no. 1, pp. 252-262, Jan. 2020
|
10.1109/TVLSI.2019.2935227
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To mitigate the ever-worsening Power Wall problem, more and more applications
need to expand their power supply to the wide-voltage range including the
near-threshold region. However, the read delay distribution of the SRAM cells
under the near-threshold voltage shows a more serious long-tail characteristic
than that under the nominal voltage due to the process fluctuation. Such
degradation of SRAM delay makes the SRAM-based cache a performance bottleneck
of systems as well. To avoid unreliable data reading, circuit-level studies
use larger/more transistors in a bitcell by sacrificing chip area and the static
power of cache arrays. Architectural studies propose the auxiliary error
correction or block disabling/remapping methods in fault-tolerant caches, which
worsen both the hit latency and energy efficiency due to the complex accessing
logic. This paper proposes the Timing-Speculation (TS) cache to boost the cache
frequency and improve energy efficiency under low supply voltages. In the TS
cache, the voltage differences of bitlines are continuously evaluated twice by
a sense amplifier (SA), and the access timing error can be detected much
earlier than that in prior methods. According to the measurement results from
the fabricated chips, the TS L1 cache aggressively increases its frequency to
1.62X and 1.92X compared with the conventional scheme at 0.5V and 0.6V supply
voltages, respectively.
|
[
{
"version": "v1",
"created": "Thu, 25 Apr 2019 08:19:01 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 08:24:40 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Shen",
"Shan",
""
],
[
"Shao",
"Tianxiang",
""
],
[
"Shang",
"Xiaojing",
""
],
[
"Guo",
"Yichen",
""
],
[
"Ling",
"Ming",
""
],
[
"Yang",
"Jun",
""
],
[
"Shi",
"Longxing",
""
]
] |
new_dataset
| 0.99439 |
2101.08021
|
Nico Ebert
|
Nico Ebert, Kurt Alexander Ackermann, Bj\"orn Scheppler
|
Bolder is Better: Raising User Awareness through Salient and Concise
Privacy Notices
| null | null |
10.1145/3411764.3445516
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the question of whether the recently proposed approach of
concise privacy notices in apps and on websites is effective in raising user
awareness. To assess the effectiveness in a realistic setting, we included
concise notices in a fictitious but realistic fitness tracking app and asked
participants recruited from an online panel to provide their feedback on the
usability of the app as a cover story. Importantly, after giving feedback,
users were also asked to recall the data practices described in the notices.
The experimental setup included the variation of different levels of saliency
and riskiness of the privacy notices. Based on a total sample of 2,274
participants, our findings indicate that concise privacy notices are indeed a
promising approach to raise user awareness for privacy information when
displayed in a salient way, especially when the notices describe risky data
practices. Our results may be helpful for regulators, user advocates and
transparency-oriented companies in creating or enforcing better privacy
transparency towards average users that do not read traditional privacy
policies.
|
[
{
"version": "v1",
"created": "Wed, 20 Jan 2021 08:36:04 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ebert",
"Nico",
""
],
[
"Ackermann",
"Kurt Alexander",
""
],
[
"Scheppler",
"Björn",
""
]
] |
new_dataset
| 0.993388 |
2102.08788
|
Ali Burak \"Unal
|
Ali Burak \"Unal, Nico Pfeifer, Mete Akg\"un
|
ppAURORA: Privacy Preserving Area Under Receiver Operating
Characteristic and Precision-Recall Curves
|
Accepted in NSS-SocialSec 2023
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Computing an AUC as a performance measure to compare the quality of different
machine learning models is one of the final steps of many research projects.
Many of these methods are trained on privacy-sensitive data and there are
several different approaches like $\epsilon$-differential privacy, federated
machine learning and cryptography if the datasets cannot be shared or used
jointly at one place for training and/or testing. In this setting, it can also
be a problem to compute the global AUC, since the labels might also contain
privacy-sensitive information. There have been approaches based on
$\epsilon$-differential privacy to address this problem, but to the best of our
knowledge, no exact privacy preserving solution has been introduced. In this
paper, we propose an MPC-based solution, called ppAURORA, with private merging
of individually sorted lists from multiple sources to compute the exact AUC as
one could obtain on the pooled original test samples. With ppAURORA, the
computation of the exact area under precision-recall and receiver operating
characteristic curves is possible even when ties between prediction confidence
values exist. We use ppAURORA to evaluate two different models predicting acute
myeloid leukemia therapy response and heart disease, respectively. We also
assess its scalability via synthetic data experiments. All these experiments
show that we efficiently and privately compute the exact same AUC with both
evaluation metrics as one can obtain on the pooled test samples in plaintext
according to the semi-honest adversary setting.
|
[
{
"version": "v1",
"created": "Wed, 17 Feb 2021 14:30:22 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jun 2021 12:17:28 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jun 2023 16:09:19 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ünal",
"Ali Burak",
""
],
[
"Pfeifer",
"Nico",
""
],
[
"Akgün",
"Mete",
""
]
] |
new_dataset
| 0.96506 |
2106.11626
|
Bal\'azs Ludm\'any
|
Bal\'azs Ludm\'any and Zsolt L\'angi and G\'abor Domokos
|
Morse-Smale complexes on convex polyhedra
|
25 pages, 9 figures
| null | null | null |
cs.CG math.CO math.MG
|
http://creativecommons.org/licenses/by/4.0/
|
Motivated by applications in geomorphology, the aim of this paper is to
extend Morse-Smale theory from smooth functions to the radial distance function
(measured from an internal point), defining a convex polyhedron in
3-dimensional Euclidean space. The resulting polyhedral Morse-Smale complex may
be regarded, on one hand, as a generalization of the Morse-Smale complex of the
smooth radial distance function defining a smooth, convex body; on the other
hand, it can also be regarded as a generalization of the Morse-Smale complex
of the piecewise linear parallel distance function (measured from a plane),
defining a polyhedral surface. Beyond similarities, our paper also highlights
the marked differences between these three problems and relates our
theory to other methods. Our work includes the design, implementation and
testing of an explicit algorithm computing the Morse-Smale complex on a convex
polyhedron.
|
[
{
"version": "v1",
"created": "Tue, 22 Jun 2021 09:25:22 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 10:21:19 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ludmány",
"Balázs",
""
],
[
"Lángi",
"Zsolt",
""
],
[
"Domokos",
"Gábor",
""
]
] |
new_dataset
| 0.999037 |
2108.10831
|
Chuang Zhu
|
Xinyu Jia, Chuang Zhu, Minzhen Li, Wenqi Tang, Shengjie Liu, Wenli
Zhou
|
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision
|
10 pages, 11 figures, ICCV workshop
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Various visual tasks such as image fusion, pedestrian detection and
image-to-image translation are very challenging in low-light conditions due to
the loss of effective target areas. In this case, infrared and visible
images can be used together to provide both rich detail information and
effective target areas. In this paper, we present LLVIP, a visible-infrared
paired dataset for low-light vision. This dataset contains 30976 images, or
15488 pairs, most of which were taken at very dark scenes, and all of the
images are strictly aligned in time and space. Pedestrians in the dataset are
labeled. We compare the dataset with other visible-infrared datasets and
evaluate the performance of some popular visual algorithms including image
fusion, pedestrian detection and image-to-image translation on the dataset. The
experimental results demonstrate the complementary effect of fusion on image
information, and reveal the deficiencies of existing algorithms for the three visual
tasks in very low-light conditions. We believe the LLVIP dataset will
contribute to the community of computer vision by promoting image fusion,
pedestrian detection and image-to-image translation in very low-light
applications. The dataset is being released in
https://bupt-ai-cz.github.io/LLVIP. Raw data is also provided for further
research such as image registration.
|
[
{
"version": "v1",
"created": "Tue, 24 Aug 2021 16:29:17 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Oct 2021 14:07:00 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Jun 2022 13:10:34 GMT"
},
{
"version": "v4",
"created": "Wed, 14 Jun 2023 12:14:17 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Jia",
"Xinyu",
""
],
[
"Zhu",
"Chuang",
""
],
[
"Li",
"Minzhen",
""
],
[
"Tang",
"Wenqi",
""
],
[
"Liu",
"Shengjie",
""
],
[
"Zhou",
"Wenli",
""
]
] |
new_dataset
| 0.999896 |
2203.08875
|
Yoshitomo Matsubara
|
Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt
|
SC2 Benchmark: Supervised Compression for Split Computing
|
Accepted at TMLR. Code and models are available at
https://github.com/yoshitomo-matsubara/sc2-benchmark
| null | null | null |
cs.LG cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing demand for deep learning models on mobile devices,
splitting neural network computation between the device and a more powerful
edge server has become an attractive solution. However, existing split
computing approaches often underperform compared to a naive baseline of remote
computation on compressed data. Recent studies propose learning compressed
representations that contain more relevant information for supervised
downstream tasks, showing improved tradeoffs between compressed data size and
supervised performance. However, existing evaluation metrics only provide an
incomplete picture of split computing. This study introduces supervised
compression for split computing (SC2) and proposes new evaluation criteria:
minimizing computation on the mobile device, minimizing transmitted data size,
and maximizing model accuracy. We conduct a comprehensive benchmark study using
10 baseline methods, three computer vision tasks, and over 180 trained models,
and discuss various aspects of SC2. We also release sc2bench, a Python package
for future research on SC2. Our proposed metrics and package will help
researchers better understand the tradeoffs of supervised compression in split
computing.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 18:43:18 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 17:59:07 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Matsubara",
"Yoshitomo",
""
],
[
"Yang",
"Ruihan",
""
],
[
"Levorato",
"Marco",
""
],
[
"Mandt",
"Stephan",
""
]
] |
new_dataset
| 0.9804 |
2203.15380
|
Wei Li
|
Wei Li, Xing Wang, Xin Xia, Jie Wu, Jiashi Li, Xuefeng Xiao, Min
Zheng, Shiping Wen
|
SepViT: Separable Vision Transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformers have witnessed prevailing success in a series of vision
tasks. However, these Transformers often incur extensive computational costs
to achieve high performance, making them burdensome to deploy on
resource-constrained devices. To alleviate this issue, we draw lessons from
depthwise separable convolution and imitate its ideology to design an efficient
Transformer backbone, i.e., Separable Vision Transformer, abbreviated as
SepViT. SepViT helps to carry out the local-global information interaction
within and among the windows in sequential order via a depthwise separable
self-attention. The novel window token embedding and grouped self-attention are
employed to compute the attention relationship among windows with negligible
cost and establish long-range visual interactions across multiple windows,
respectively. Extensive experiments on general-purpose vision benchmarks
demonstrate that SepViT can achieve a state-of-the-art trade-off between
performance and latency. Among them, SepViT achieves 84.2% top-1 accuracy on
ImageNet-1K classification while decreasing the latency by 40%, compared to the
ones with similar accuracy (e.g., CSWin). Furthermore, SepViT achieves 51.0%
mIoU on ADE20K semantic segmentation task, 47.9 AP on the RetinaNet-based COCO
detection task, 49.4 box AP and 44.6 mask AP on Mask R-CNN-based COCO object
detection and instance segmentation tasks.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 09:20:01 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2022 01:36:46 GMT"
},
{
"version": "v3",
"created": "Sat, 7 May 2022 08:20:10 GMT"
},
{
"version": "v4",
"created": "Thu, 15 Jun 2023 16:37:26 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Li",
"Wei",
""
],
[
"Wang",
"Xing",
""
],
[
"Xia",
"Xin",
""
],
[
"Wu",
"Jie",
""
],
[
"Li",
"Jiashi",
""
],
[
"Xiao",
"Xuefeng",
""
],
[
"Zheng",
"Min",
""
],
[
"Wen",
"Shiping",
""
]
] |
new_dataset
| 0.998717 |
2204.07657
|
Christian Reilly
|
Oleksandr Ivanov, Karin Molander, Robert Dunne, Stephen Liu, Deena
Brecher, Kevin Masek, Erica Lewis, Lisa Wolf, Debbie Travers, Deb Delaney,
Kyla Montgomery, Christian Reilly
|
Detection of sepsis during emergency department triage using machine
learning
|
25 pages, 2 figure, 3 tables, 4 supplementary tables
| null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sepsis is a life-threatening condition with organ dysfunction and is a
leading cause of death and critical illness worldwide. Even a few hours of
delay in the treatment of sepsis results in increased mortality. Early
detection of sepsis during emergency department triage would allow early
initiation of lab analysis, antibiotic administration, and other sepsis
treatment protocols. The purpose of this study was to compare sepsis detection
performance at ED triage (prior to the use of laboratory diagnostics) of the
standard sepsis screening algorithm (SIRS with source of infection) and a
machine learning algorithm trained on EHR triage data. A machine learning model
(KATE Sepsis) was developed using patient encounters with triage data from
16 participating hospitals. KATE Sepsis and standard screening were
retrospectively evaluated on the adult population of 512,949 medical records.
KATE Sepsis demonstrates an AUC of 0.9423 (0.9401 - 0.9441) with sensitivity of
71.09% (70.12% - 71.98%) and specificity of 94.81% (94.75% - 94.87%). Standard
screening demonstrates an AUC of 0.6826 (0.6774 - 0.6878) with sensitivity of
40.8% (39.71% - 41.86%) and specificity of 95.72% (95.68% - 95.78%). The KATE
Sepsis model trained to detect sepsis demonstrates 77.67% (75.78% - 79.42%)
sensitivity in detecting severe sepsis and 86.95% (84.2% - 88.81%) sensitivity
in detecting septic shock. The standard screening protocol demonstrates 43.06%
(41% - 45.87%) sensitivity in detecting severe sepsis and 40% (36.55% - 43.26%)
sensitivity in detecting septic shock. Future research should focus on the
prospective impact of KATE Sepsis on administration of antibiotics, readmission
rate, morbidity and mortality.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 21:57:08 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 16:48:34 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Jul 2022 00:47:30 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Nov 2022 20:29:42 GMT"
},
{
"version": "v5",
"created": "Tue, 25 Apr 2023 22:35:06 GMT"
},
{
"version": "v6",
"created": "Thu, 15 Jun 2023 00:57:57 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ivanov",
"Oleksandr",
""
],
[
"Molander",
"Karin",
""
],
[
"Dunne",
"Robert",
""
],
[
"Liu",
"Stephen",
""
],
[
"Brecher",
"Deena",
""
],
[
"Masek",
"Kevin",
""
],
[
"Lewis",
"Erica",
""
],
[
"Wolf",
"Lisa",
""
],
[
"Travers",
"Debbie",
""
],
[
"Delaney",
"Deb",
""
],
[
"Montgomery",
"Kyla",
""
],
[
"Reilly",
"Christian",
""
]
] |
new_dataset
| 0.976646 |
2205.12386
|
Aidan San
|
Aidan San, Yuan Zhuang, Jan Bakus, Colin Lockard, David Ciemiewicz,
Sandeep Atluri, Yangfeng Ji, Kevin Small, Heba Elfardy
|
PLAtE: A Large-scale Dataset for List Page Web Extraction
|
Accepted to ACL Industry Track 2023
| null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, neural models have been leveraged to significantly improve the
performance of information extraction from semi-structured websites. However, a
barrier for continued progress is the small number of datasets large enough to
train these models. In this work, we introduce the PLAtE (Pages of Lists
Attribute Extraction) benchmark dataset as a challenging new web extraction
task. PLAtE focuses on shopping data, specifically extractions from product
review pages with multiple items encompassing the tasks of: (1) finding
product-list segmentation boundaries and (2) extracting attributes for each
product. PLAtE is composed of 52,898 items collected from 6,694 pages and
156,014 attributes, making it the first large-scale list page web extraction
dataset. We use a multi-stage approach to collect and annotate the dataset and
adapt three state-of-the-art web extraction models to the two tasks comparing
their strengths and weaknesses both quantitatively and qualitatively.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 22:26:58 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 17:06:49 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"San",
"Aidan",
""
],
[
"Zhuang",
"Yuan",
""
],
[
"Bakus",
"Jan",
""
],
[
"Lockard",
"Colin",
""
],
[
"Ciemiewicz",
"David",
""
],
[
"Atluri",
"Sandeep",
""
],
[
"Ji",
"Yangfeng",
""
],
[
"Small",
"Kevin",
""
],
[
"Elfardy",
"Heba",
""
]
] |
new_dataset
| 0.999844 |
2207.03128
|
Qijian Zhang
|
Qijian Zhang, Junhui Hou, Yue Qian
|
PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal
Distillation for 3D Shape Recognition
|
Accepted to TMM
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As two fundamental representation modalities of 3D objects, 3D point clouds
and multi-view 2D images record shape information from different domains of
geometric structures and visual appearances. In the current deep learning era,
remarkable progress in processing such two data modalities has been achieved
through respectively customizing compatible 3D and 2D network architectures.
However, unlike multi-view image-based 2D visual modeling paradigms, which have
shown leading performance in several common 3D shape recognition benchmarks,
point cloud-based 3D geometric modeling paradigms are still highly limited by
insufficient learning capacity, due to the difficulty of extracting
discriminative features from irregular geometric signals. In this paper, we
explore the possibility of boosting deep 3D point cloud encoders by
transferring visual knowledge extracted from deep 2D image encoders under a
standard teacher-student distillation workflow. Generally, we propose PointMCD,
a unified multi-view cross-modal distillation architecture, including a
pretrained deep image encoder as the teacher and a deep point encoder as the
student. To perform heterogeneous feature alignment between 2D visual and 3D
geometric domains, we further investigate visibility-aware feature projection
(VAFP), by which point-wise embeddings are reasonably aggregated into
view-specific geometric descriptors. By pair-wisely aligning multi-view visual
and geometric descriptors, we can obtain more powerful deep point encoders
without exhaustive and complicated network modifications. Experiments on 3D
shape classification, part segmentation, and unsupervised learning strongly
validate the effectiveness of our method. The code and data will be publicly
available at https://github.com/keeganhk/PointMCD.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 07:23:20 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2022 04:04:04 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 09:44:16 GMT"
},
{
"version": "v4",
"created": "Thu, 15 Jun 2023 06:21:09 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zhang",
"Qijian",
""
],
[
"Hou",
"Junhui",
""
],
[
"Qian",
"Yue",
""
]
] |
new_dataset
| 0.996713 |
2210.01298
|
Hanzhe Teng
|
Hanzhe Teng, Dimitrios Chatziparaschis, Xinyue Kan, Amit K.
Roy-Chowdhury, Konstantinos Karydis
|
Centroid Distance Keypoint Detector for Colored Point Clouds
|
Accepted to IEEE/CVF Winter Conference on Applications of Computer
Vision (WACV) 2023; copyright will be transferred to IEEE upon publication
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Keypoint detection serves as the basis for many computer vision and robotics
applications. Despite the fact that colored point clouds can be readily
obtained, most existing keypoint detectors extract only geometry-salient
keypoints, which can impede the overall performance of systems that intend to
(or have the potential to) leverage color information. To promote advances in
such systems, we propose an efficient multi-modal keypoint detector that can
extract both geometry-salient and color-salient keypoints in colored point
clouds. The proposed CEntroid Distance (CED) keypoint detector comprises an
intuitive and effective saliency measure, the centroid distance, that can be
used in both 3D space and color space, and a multi-modal non-maximum
suppression algorithm that can select keypoints with high saliency in two or
more modalities. The proposed saliency measure directly leverages the
distribution of points in a local neighborhood and does not require normal
estimation or eigenvalue decomposition. We evaluate the proposed method in
terms of repeatability and computational efficiency (i.e. running time) against
state-of-the-art keypoint detectors on both synthetic and real-world datasets.
Results demonstrate that our proposed CED keypoint detector requires minimal
computational time while attaining high repeatability. To showcase one of the
potential applications of the proposed method, we further investigate the task
of colored point cloud registration. Results suggest that our proposed CED
detector outperforms state-of-the-art handcrafted and learning-based keypoint
detectors in the evaluated scenes. The C++ implementation of the proposed
method is made publicly available at
https://github.com/UCR-Robotics/CED_Detector.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 00:55:51 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 04:43:24 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Teng",
"Hanzhe",
""
],
[
"Chatziparaschis",
"Dimitrios",
""
],
[
"Kan",
"Xinyue",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
],
[
"Karydis",
"Konstantinos",
""
]
] |
new_dataset
| 0.999414 |
2210.10233
|
Samia Sultana
|
Samia Sultana, Boshir Ahmed, Manoranjan Paul, Muhammad Rafiqul Islam
and Shamim Ahmad
|
Vision-Based Robust Lane Detection and Tracking under Different
Challenging Environmental Conditions
|
19 pages, 11 figures, submitted to IEEE Access
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Lane marking detection is fundamental for advanced driving assistance
systems. However, detecting lanes is highly challenging when the visibility of
road lane markings is low due to challenging real-life environments and adverse
weather. Most of the lane detection methods suffer from four types of
challenges: (i) light effects i.e., shadow, glare of light, reflection etc.;
(ii) Obscured visibility of eroded, blurred, colored and cracked lane caused by
natural disasters and adverse weather; (iii) lane marking occlusion by
different objects from surroundings (wiper, vehicles etc.); and (iv) presence
of confusing lane like lines inside the lane view e.g., guardrails, pavement
marking, road divider etc. Here, we propose a robust lane detection and
tracking method with three key technologies. First, we introduce a
comprehensive intensity threshold range (CITR) to improve the performance of
the canny operator in detecting low intensity lane edges. Second, we propose a
two-step lane verification technique, the angle based geometric constraint
(AGC) and length-based geometric constraint (LGC) followed by Hough Transform,
to verify the characteristics of lane marking and to prevent incorrect lane
detection. Finally, we propose a novel lane tracking technique, by defining a
range of horizontal lane position (RHLP) along the x axis which will be
updating with respect to the lane position of previous frame. It can keep track
of the lane position when either left or right or both lane markings are
partially and fully invisible. To evaluate the performance of the proposed
method we used the DSDLDE [1] and SLD [2] dataset with 1080x1920 and 480x720
resolutions at 24 and 25 frames/sec respectively. Experimental results show
that the average detection rate is 97.55%, and the average processing time is
22.33 msec/frame, outperforming the state-of-the-art method.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 01:25:21 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Jan 2023 09:33:49 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jun 2023 03:35:28 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Sultana",
"Samia",
""
],
[
"Ahmed",
"Boshir",
""
],
[
"Paul",
"Manoranjan",
""
],
[
"Islam",
"Muhammad Rafiqul",
""
],
[
"Ahmad",
"Shamim",
""
]
] |
new_dataset
| 0.983199 |
2211.04054
|
Juan Pablo Zuluaga-Gomez
|
Juan Zuluaga-Gomez and Karel Vesel\'y and Igor Sz\"oke and Alexander
Blatt and Petr Motlicek and Martin Kocour and Mickael Rigault and Khalid
Choukri and Amrutha Prasad and Seyyed Saeed Sarfjoo and Iuliia Nigmatulina
and Claudia Cevenini and Pavel Kol\v{c}\'arek and Allan Tart and Jan
\v{C}ernock\'y and Dietrich Klakow
|
ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech
Recognition and Natural Language Understanding of Air Traffic Control
Communications
|
Manuscript under review; The code is available at:
https://github.com/idiap/atco2-corpus
| null | null | null |
cs.CL cs.AI cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Personal assistants, automatic speech recognizers and dialogue understanding
systems are becoming more critical in our interconnected digital world. A clear
example is air traffic control (ATC) communications. ATC aims at guiding
aircraft and controlling the airspace in a safe and optimal manner. These
voice-based dialogues are carried between an air traffic controller (ATCO) and
pilots via very-high frequency radio channels. In order to incorporate these
novel technologies into ATC (low-resource domain), large-scale annotated
datasets are required to develop the data-driven AI systems. Two examples are
automatic speech recognition (ASR) and natural language understanding (NLU). In
this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering
research on the challenging ATC field, which has lagged behind due to lack of
annotated data. The ATCO2 corpus covers 1) data collection and pre-processing,
2) pseudo-annotations of speech data, and 3) extraction of ATC-related named
entities. The ATCO2 corpus is split into three subsets. 1) ATCO2-test-set
corpus contains 4 hours of ATC speech with manual transcripts and a subset with
gold annotations for named-entity recognition (callsign, command, value). 2)
The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched
with automatic transcripts from an in-domain speech recognizer, contextual
information, speaker turn information, signal-to-noise ratio estimate and
English language detection score per sample. Both are available for purchase
through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3)
The ATCO2-test-set-1h corpus is a one-hour subset from the original test set
corpus, that we are offering for free at https://www.atco2.org/data. We expect
the ATCO2 corpus will foster research on robust ASR and NLU not only in the
field of ATC communications but also in the general research community.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 07:26:45 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 13:53:05 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zuluaga-Gomez",
"Juan",
""
],
[
"Veselý",
"Karel",
""
],
[
"Szöke",
"Igor",
""
],
[
"Blatt",
"Alexander",
""
],
[
"Motlicek",
"Petr",
""
],
[
"Kocour",
"Martin",
""
],
[
"Rigault",
"Mickael",
""
],
[
"Choukri",
"Khalid",
""
],
[
"Prasad",
"Amrutha",
""
],
[
"Sarfjoo",
"Seyyed Saeed",
""
],
[
"Nigmatulina",
"Iuliia",
""
],
[
"Cevenini",
"Claudia",
""
],
[
"Kolčárek",
"Pavel",
""
],
[
"Tart",
"Allan",
""
],
[
"Černocký",
"Jan",
""
],
[
"Klakow",
"Dietrich",
""
]
] |
new_dataset
| 0.999806 |
2211.14568
|
Jihoon Ko
|
Jihoon Ko, Shinhwan Kang, Taehyung Kwon, Heechan Moon, and Kijung Shin
|
BeGin: Extensive Benchmark Scenarios and An Easy-to-use Framework for
Graph Continual Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Continual Learning (CL) is the process of learning ceaselessly a sequence of
tasks. Most existing CL methods deal with independent data (e.g., images and
text) for which many benchmark frameworks and results under standard
experimental settings are available. However, CL methods for graph data (graph
CL) are surprisingly underexplored because of (a) the lack of standard
experimental settings, especially regarding how to deal with the dependency
between instances, (b) the lack of benchmark datasets and scenarios, and (c)
high complexity in implementation and evaluation due to the dependency. In this
paper, regarding (a), we define four standard incremental settings (task-,
class-, domain-, and time-incremental) for graph data, which are naturally
applied to many node-, link-, and graph-level problems. Regarding (b), we
provide 25 benchmark scenarios based on 15 real-world graphs. Regarding (c), we
develop BeGin, an easy and fool-proof framework for graph CL. BeGin is easily
extended since it is modularized with reusable modules for data processing,
algorithm design, and evaluation. Especially, the evaluation module is
completely separated from user code to eliminate potential mistakes. Using all
the above, we report extensive benchmark results of 10 graph CL methods.
Compared to the latest benchmark for graph CL, using BeGin, we cover 3x more
combinations of incremental settings and levels of problems. All assets for the
benchmark framework are available at https://github.com/ShinhwanKang/BeGin.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 13:48:05 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 16:29:36 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ko",
"Jihoon",
""
],
[
"Kang",
"Shinhwan",
""
],
[
"Kwon",
"Taehyung",
""
],
[
"Moon",
"Heechan",
""
],
[
"Shin",
"Kijung",
""
]
] |
new_dataset
| 0.985198 |
2304.03253
|
Celeste Barnaby
|
Celeste Barnaby, Qiaochu Chen, Roopsha Samanta, Isil Dillig
|
ImageEye: Batch Image Processing Using Program Synthesis
| null | null |
10.1145/3591248
| null |
cs.PL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new synthesis-based approach for batch image
processing. Unlike existing tools that can only apply global edits to the
entire image, our method can apply fine-grained edits to individual objects
within the image. For example, our method can selectively blur or crop specific
objects that have a certain property. To facilitate such fine-grained image
editing tasks, we propose a neuro-symbolic domain-specific language (DSL) that
combines pre-trained neural networks for image classification with other
language constructs that enable symbolic reasoning. Our method can
automatically learn programs in this DSL from user demonstrations by utilizing
a novel synthesis algorithm. We have implemented the proposed technique in a
tool called ImageEye and evaluated it on 50 image editing tasks. Our evaluation
shows that ImageEye is able to automate 96% of these tasks.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 17:38:34 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 00:36:54 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2023 17:28:27 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Barnaby",
"Celeste",
""
],
[
"Chen",
"Qiaochu",
""
],
[
"Samanta",
"Roopsha",
""
],
[
"Dillig",
"Isil",
""
]
] |
new_dataset
| 0.977644 |
2304.08252
|
Nhat Hao Truong
|
Nhat Hao Truong, Huu Thien Mai, Tuan Anh Tran, Minh Quang Tran, Duc
Duy Nguyen, Ngoc Viet Phuong Pham
|
PaaS: Planning as a Service for reactive driving in CARLA Leaderboard
|
accepted on 05.06.2023, revised on 15.06.2023, to be published on
ICSSE 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end deep learning approaches have been proven to be efficient in
autonomous driving and robotics. By using deep learning techniques for
decision-making, those systems are often referred to as a black box, and the
result is driven by data. In this paper, we propose PaaS (Planning as a
Service), a vanilla module to generate local trajectory planning for autonomous
driving in CARLA simulation. Our method is submitted in International CARLA
Autonomous Driving Leaderboard (CADL), which is a platform to evaluate the
driving proficiency of autonomous agents in realistic traffic scenarios. Our
approach focuses on reactive planning in the Frenet frame under complex urban
street constraints and driver comfort. The planner generates a collection
of feasible trajectories, leveraging heuristic cost functions with controllable
driving style factor to choose the optimal-control path that satisfies safe
travelling criteria. PaaS provides solutions that handle challenging traffic
situations in CADL well. In the strict evaluation of the CADL Map Track, our
approach ranked 3rd out of 9 submissions in terms of driving score. However,
with the focus on minimizing maneuver risk and ensuring passenger safety, our
infraction penalty figures surpass those of the two leading submissions by 20
percent.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 13:14:03 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Apr 2023 11:22:57 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2023 22:40:22 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Truong",
"Nhat Hao",
""
],
[
"Mai",
"Huu Thien",
""
],
[
"Tran",
"Tuan Anh",
""
],
[
"Tran",
"Minh Quang",
""
],
[
"Nguyen",
"Duc Duy",
""
],
[
"Pham",
"Ngoc Viet Phuong",
""
]
] |
new_dataset
| 0.9989 |
2304.11837
|
Yao Su
|
Yao Su, Pengkang Yu, Matthew J. Gerber, Lecheng Ruan, Tsu-Chin Tsao
|
Fault-tolerant Control of an Over-actuated UAV Platform Built on
Quadcopters and Passive Hinges
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Propeller failure is a major cause of multirotor Unmanned Aerial Vehicles
(UAVs) crashes. While conventional multirotor systems struggle to address this
issue due to underactuation, over-actuated platforms can continue flying with
appropriate fault-tolerant control (FTC). This paper presents a robust FTC
controller for an over-actuated UAV platform composed of quadcopters mounted on
passive joints, offering input redundancy at both the high-level vehicle
control and the low-level quadcopter control of vectored thrusts. To maximize
the benefits of input redundancy during propeller failure, the proposed FTC
controller features a hierarchical control architecture with three key
components: (i) a low-level adjustment strategy to prevent propeller-level
thrust saturation; (ii) a compensation loop for mitigating introduced
disturbances; (iii) a nullspace-based control allocation framework to avoid
quadcopter-level thrust saturation. Through reallocating actuator inputs in
both the low-level and high-level control loops, the low-level quadcopter
control can be maintained with up to two failed propellers, ensuring that the
whole platform remains stable and avoids crashing. The proposed controller's
superior performance is thoroughly examined through simulations and real-world
experiments.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 06:05:24 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 05:47:32 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Su",
"Yao",
""
],
[
"Yu",
"Pengkang",
""
],
[
"Gerber",
"Matthew J.",
""
],
[
"Ruan",
"Lecheng",
""
],
[
"Tsao",
"Tsu-Chin",
""
]
] |
new_dataset
| 0.998223 |
2304.11906
|
Hanqing Sun
|
Hanqing Sun, Yanwei Pang, Jiale Cao, Jin Xie, Xuelong Li
|
Transformer-based stereo-aware 3D object detection from binocular images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Vision Transformers have shown promising progress in various object detection
tasks, including monocular 2D/3D detection and surround-view 3D detection.
However, when used in essential and classic stereo 3D object detection,
directly adopting those surround-view Transformers leads to slow convergence
and significant precision drops. We argue that one of the causes of this defect
is that the surround-view Transformers do not consider the stereo-specific
image correspondence information. In a surround-view system, the overlapping
areas are small, and thus correspondence is not a primary issue. In this paper,
we explore the model design of vision Transformers in stereo 3D object
detection, focusing particularly on extracting and encoding the task-specific
image correspondence information. To achieve this goal, we present TS3D, a
Transformer-based Stereo-aware 3D object detector. In the TS3D, a
Disparity-Aware Positional Encoding (DAPE) model is proposed to embed the image
correspondence information into stereo features. The correspondence is encoded
as normalized disparity and is used in conjunction with sinusoidal 2D
positional encoding to provide the location information of the 3D scene. To
extract enriched multi-scale stereo features, we propose a Stereo Reserving
Feature Pyramid Network (SRFPN). The SRFPN is designed to reserve the
correspondence information while fusing intra-scale and aggregating cross-scale
stereo features. Our proposed TS3D achieves a 41.29% Moderate Car detection
average precision on the KITTI test set and takes 88 ms to detect objects from
each binocular image pair. It is competitive with advanced counterparts in
terms of both precision and inference speed.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 08:29:45 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 01:56:53 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Sun",
"Hanqing",
""
],
[
"Pang",
"Yanwei",
""
],
[
"Cao",
"Jiale",
""
],
[
"Xie",
"Jin",
""
],
[
"Li",
"Xuelong",
""
]
] |
new_dataset
| 0.974699 |
2304.14365
|
Xiaoyu Tian
|
Xiaoyu Tian, Tao Jiang, Longfei Yun, Yucheng Mao, Huitong Yang, Yue
Wang, Yilun Wang, Hang Zhao
|
Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous
Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic perception requires the modeling of both 3D geometry and semantics.
Existing methods typically focus on estimating 3D bounding boxes, neglecting
finer geometric details and struggling to handle general, out-of-vocabulary
objects. 3D occupancy prediction, which estimates the detailed occupancy states
and semantics of a scene, is an emerging task to overcome these limitations. To
support 3D occupancy prediction, we develop a label generation pipeline that
produces dense, visibility-aware labels for any given scene. This pipeline
comprises three stages: voxel densification, occlusion reasoning, and
image-guided voxel refinement. We establish two benchmarks, derived from the
Waymo Open Dataset and the nuScenes Dataset, namely Occ3D-Waymo and
Occ3D-nuScenes benchmarks. Furthermore, we provide an extensive analysis of the
proposed dataset with various baseline models. Lastly, we propose a new model,
dubbed Coarse-to-Fine Occupancy (CTF-Occ) network, which demonstrates superior
performance on the Occ3D benchmarks. The code, data, and benchmarks are
released at https://tsinghua-mars-lab.github.io/Occ3D/.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 17:40:08 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 17:53:43 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Tian",
"Xiaoyu",
""
],
[
"Jiang",
"Tao",
""
],
[
"Yun",
"Longfei",
""
],
[
"Mao",
"Yucheng",
""
],
[
"Yang",
"Huitong",
""
],
[
"Wang",
"Yue",
""
],
[
"Wang",
"Yilun",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.997687 |
2304.14924
|
Sourav Ghosh
|
Shuvadeep Masanta, Ramyashree Pramanik, Sourav Ghosh, Tanmay
Bhattacharya
|
An Edge Assisted Robust Smart Traffic Management and Signalling System
for Guiding Emergency Vehicles During Peak Hours
|
Accepted at the Doctoral Symposium on Human Centered Computing (HUMAN
2023), February 25, 2023. To be published in Springer Tracts in
Human-Centered Computing, Book Title: Intelligent Human Centered Computing;
see https://link.springer.com/book/9789819934775
|
Intelligent Human Centered Computing. Human 2023. Springer Tracts
in Human-Centered Computing. Springer, Singapore. 2023. pp. 337-346
|
10.1007/978-981-99-3478-2_29
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Congestion in traffic is an unavoidable circumstance in many cities in India
and other countries. It is an issue of major concern. The steep rise in the
number of automobiles on the roads followed by old infrastructure, accidents,
pedestrian traffic, and traffic rule violations all add to challenging traffic
conditions. Given these poor conditions of traffic, there is a critical need
for automatically detecting and signaling systems. There are already various
technologies that are used for traffic management and signaling systems like
video analysis, infrared sensors, and wireless sensors. The main issue with
these methods is that they are very costly and require high maintenance. In this
paper, we have proposed a three-phase system that can guide emergency vehicles
and manage traffic based on the degree of congestion. In the first phase, the
system processes the captured images and calculates the Index value which is
used to discover the degree of congestion. The Index value of a particular road
depends on its width and the length up to which the camera captures images of
that road. We have to take input for the parameters (length and width) while
setting up the system. In the second phase, the system checks whether there are
any emergency vehicles present or not in any lane. In the third phase, the
whole processing and decision-making part is performed at the edge server. The
proposed model is robust and it takes into consideration adverse weather
conditions such as haze, fog, and wind. It works very efficiently in low
light conditions also. The edge server is a strategically placed server that
provides us with low latency and better connectivity. Using Edge technology in
this traffic management system reduces the strain on cloud servers and the
system becomes more reliable in real time because latency and bandwidth usage
are reduced by processing at the intermediate edge server.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 15:31:38 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 11:32:15 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Masanta",
"Shuvadeep",
""
],
[
"Pramanik",
"Ramyashree",
""
],
[
"Ghosh",
"Sourav",
""
],
[
"Bhattacharya",
"Tanmay",
""
]
] |
new_dataset
| 0.999037 |
2305.01082
|
Sanat Sharma
|
Sanat Sharma, Josep Valls-Vargas, Tracy Holloway King, Francois
Guerin, Chirag Arora
|
Contextual Multilingual Spellchecker for User Queries
|
5 pages, In Proceedings of the 46th International ACM SIGIR
Conference on Research and Development in Information Retrieval (SIGIR '23)
| null |
10.1145/3539618.3591861
| null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Spellchecking is one of the most fundamental and widely used search features.
Correcting incorrectly spelled user queries not only enhances the user
experience but is expected by the user. However, most widely available
spellchecking solutions are either lower accuracy than state-of-the-art
solutions or too slow to be used for search use cases where latency is a key
requirement. Furthermore, most innovative recent architectures focus on English
and are not trained in a multilingual fashion and are trained for spell
correction in longer text, which is a different paradigm from spell correction
for user queries, where context is sparse (most queries are 1-2 words long).
Finally, since most enterprises have unique vocabularies such as product names,
off-the-shelf spelling solutions fall short of users' needs. In this work, we
build a multilingual spellchecker that is extremely fast and scalable and that
adapts its vocabulary and hence speller output based on a specific product's
needs. Furthermore, our speller outperforms general-purpose spellers by a wide
margin on in-domain datasets. Our multilingual speller is used in search in
Adobe products, powering autocomplete in various applications.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 20:29:59 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 14:29:58 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Sharma",
"Sanat",
""
],
[
"Valls-Vargas",
"Josep",
""
],
[
"King",
"Tracy Holloway",
""
],
[
"Guerin",
"Francois",
""
],
[
"Arora",
"Chirag",
""
]
] |
new_dataset
| 0.992529 |
2305.01863
|
Eason Chen
|
Eason Chen, Ray Huang, Han-Shin Chen, Yuen-Hsien Tseng, and Liang-Yi
Li
|
GPTutor: a ChatGPT-powered programming tool for code explanation
|
6 pages. International Conference on Artificial Intelligence in
Education 2023
| null | null | null |
cs.HC cs.AI cs.CL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Learning new programming skills requires tailored guidance. With the
emergence of advanced Natural Language Generation models like the ChatGPT API,
there is now a possibility of creating a convenient and personalized tutoring
system with AI for computer science education. This paper presents GPTutor, a
ChatGPT-powered programming tool, which is a Visual Studio Code extension using
the ChatGPT API to provide programming code explanations. By integrating Visual
Studio Code API, GPTutor can comprehensively analyze the provided code by
referencing the relevant source codes. As a result, GPTutor can use designed
prompts to explain the selected code with a pop-up message. GPTutor is now
published at the Visual Studio Code Extension Marketplace, and its source code
is openly accessible on GitHub. Preliminary evaluation indicates that GPTutor
delivers the most concise and accurate explanations compared to vanilla ChatGPT
and GitHub Copilot. Moreover, the feedback from students and teachers indicated
that GPTutor is user-friendly and can explain given codes satisfactorily.
Finally, we discuss possible future research directions for GPTutor. This
includes enhancing its performance and personalization via further prompt
programming, as well as evaluating the effectiveness of GPTutor with real
users.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 02:30:13 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 07:06:55 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Chen",
"Eason",
""
],
[
"Huang",
"Ray",
""
],
[
"Chen",
"Han-Shin",
""
],
[
"Tseng",
"Yuen-Hsien",
""
],
[
"Li",
"Liang-Yi",
""
]
] |
new_dataset
| 0.995379 |
2305.07498
|
Jianfeng Kuang
|
Jianfeng Kuang, Wei Hua, Dingkang Liang, Mingkun Yang, Deqiang Jiang,
Bo Ren, and Xiang Bai
|
Visual Information Extraction in the Wild: Practical Dataset and
End-to-end Solution
|
15 pages, 6 figures, ICDAR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual information extraction (VIE), which aims to simultaneously perform OCR
and information extraction in a unified framework, has drawn increasing
attention due to its essential role in various applications like understanding
receipts, goods, and traffic signs. However, as existing benchmark datasets for
VIE mainly consist of document images without the adequate diversity of layout
structures, background disturbances, and entity categories, they cannot fully
reveal the challenges of real-world applications. In this paper, we propose a
large-scale dataset consisting of camera images for VIE, which contains not
only the larger variance of layout, backgrounds, and fonts but also much more
types of entities. Besides, we propose a novel framework for end-to-end VIE
that combines the stages of OCR and information extraction in an end-to-end
learning fashion. Different from the previous end-to-end approaches that
directly adopt OCR features as the input of an information extraction module,
we propose to use contrastive learning to narrow the semantic gap caused by the
difference between the tasks of OCR and information extraction. We evaluate the
existing end-to-end methods for VIE on the proposed dataset and observe that
the performance of these methods has a distinguishable drop from SROIE (a
widely used English dataset) to our proposed dataset due to the larger variance
of layout and entities. These results demonstrate our dataset is more practical
for promoting advanced VIE algorithms. In addition, experiments demonstrate
that the proposed VIE method consistently achieves clear performance
gains on the proposed and SROIE datasets.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 14:11:47 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 03:31:12 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Kuang",
"Jianfeng",
""
],
[
"Hua",
"Wei",
""
],
[
"Liang",
"Dingkang",
""
],
[
"Yang",
"Mingkun",
""
],
[
"Jiang",
"Deqiang",
""
],
[
"Ren",
"Bo",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.999493 |
2305.08144
|
Danyang Zhang
|
Danyang Zhang, Lu Chen, Zihan Zhao, Ruisheng Cao, Kai Yu
|
Mobile-Env: An Evaluation Platform and Benchmark for Interactive Agents
in LLM Era
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Diverse evaluation benchmarks play a crucial role to assess a wide range of
capabilities of large language models (LLM). Although plenty of endeavors have
been dedicated to building valuable benchmarks, there is still little work
aiming at evaluating the capability of LLM in multistep interactive
environments. Noticing that LLM requires a text representation of the
environment observations for interaction, we choose to fill such a blank by
building a novel benchmark based on the information user interface (InfoUI).
InfoUI consists of rich text content and can be represented in several text
formats, and is thus suitable for assessing the interaction ability of LLM.
Additionally, the complex structures of InfoUI can further raise a challenge
for LLM to understand structured texts rather than plain texts. An interaction
platform is always used to evaluate an agent; however, there is still a lack of
a satisfactory interaction platform dedicated to InfoUI. Consequently, we
propose to build a novel easily-extendable, adaptable, and close-to-reality
interaction platform, Mobile-Env, to provide a base for an appropriate
benchmark. Based on Mobile-Env, an InfoUI task set WikiHow is then built to
establish a benchmark for the multistep interaction capability of LLM in
structured text-based environments. Agents based on a series of LLMs are tested
on the task set to obtain insight into the potential and challenges of LLM
for InfoUI interaction. The community is sincerely welcome to contribute
new environments and new task sets for Mobile-Env to provide better test
benchmarks and facilitate the development of the corresponding domains.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 12:31:03 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 09:20:46 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zhang",
"Danyang",
""
],
[
"Chen",
"Lu",
""
],
[
"Zhao",
"Zihan",
""
],
[
"Cao",
"Ruisheng",
""
],
[
"Yu",
"Kai",
""
]
] |
new_dataset
| 0.999544 |
2305.08989
|
Yang Liu
|
Yang Liu, Maxence Boels, Luis C. Garcia-Peraza-Herrera, Tom
Vercauteren, Prokar Dasgupta, Alejandro Granados and Sebastien Ourselin
|
LoViT: Long Video Transformer for Surgical Phase Recognition
|
Code link: https://github.com/MRUIL/LoViT
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Online surgical phase recognition plays a significant role towards building
contextual tools that could quantify performance and oversee the execution of
surgical workflows. Current approaches are limited since they train spatial
feature extractors using frame-level supervision that could lead to incorrect
predictions due to similar frames appearing at different phases, and poorly
fuse local and global features due to computational constraints which can
affect the analysis of long videos commonly encountered in surgical
interventions. In this paper, we present a two-stage method, called Long Video
Transformer (LoViT) for fusing short- and long-term temporal information that
combines a temporally-rich spatial feature extractor and a multi-scale temporal
aggregator consisting of two cascaded L-Trans modules based on self-attention,
followed by a G-Informer module based on ProbSparse self-attention for
processing global temporal information. The multi-scale temporal head then
combines local and global features and classifies surgical phases using phase
transition-aware supervision. Our approach outperforms state-of-the-art methods
on the Cholec80 and AutoLaparo datasets consistently. Compared to Trans-SVNet,
LoViT achieves a 2.4 pp (percentage point) improvement in video-level accuracy
on Cholec80 and a 3.1 pp improvement on AutoLaparo. Moreover, it achieves a 5.3
pp improvement in phase-level Jaccard on AutoLaparo and a 1.55 pp improvement
on Cholec80. Our results demonstrate the effectiveness of our approach in
achieving state-of-the-art performance of surgical phase recognition on two
datasets of different surgical procedures and temporal sequencing
characteristics whilst introducing mechanisms that cope with long videos.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 20:06:14 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 12:42:44 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2023 16:40:08 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Liu",
"Yang",
""
],
[
"Boels",
"Maxence",
""
],
[
"Garcia-Peraza-Herrera",
"Luis C.",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Dasgupta",
"Prokar",
""
],
[
"Granados",
"Alejandro",
""
],
[
"Ourselin",
"Sebastien",
""
]
] |
new_dataset
| 0.997856 |
2305.11015
|
Daniel Hausmann
|
Oliver G\"orlitz, Daniel Hausmann, Merlin Humml, Dirk Pattinson, Simon
Prucker, Lutz Schr\"oder
|
COOL 2 -- A Generic Reasoner for Modal Fixpoint Logics
|
Final version (corrected slight mistake in Rabin-type formula series)
| null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
There is a wide range of modal logics whose semantics goes beyond relational
structures, and instead involves, e.g., probabilities, multi-player games,
weights, or neighbourhood structures. Coalgebraic logic serves as a unifying
semantic and algorithmic framework for such logics. It provides uniform
reasoning algorithms that are easily instantiated to particular, concretely
given logics. The COOL 2 reasoner provides an implementation of such generic
algorithms for coalgebraic modal fixpoint logics. As concrete instances, we
obtain in particular reasoners for the aconjunctive and alternation-free
fragments of the graded $\mu$-calculus and the alternating-time $\mu$-calculus.
We evaluate the tool on standard benchmark sets for fixpoint-free graded modal
logic and alternating-time temporal logic (ATL), as well as on a dedicated set
of benchmarks for the graded $\mu$-calculus.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 14:48:38 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 12:46:42 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 07:12:42 GMT"
},
{
"version": "v4",
"created": "Thu, 15 Jun 2023 11:41:59 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Görlitz",
"Oliver",
""
],
[
"Hausmann",
"Daniel",
""
],
[
"Humml",
"Merlin",
""
],
[
"Pattinson",
"Dirk",
""
],
[
"Prucker",
"Simon",
""
],
[
"Schröder",
"Lutz",
""
]
] |
new_dataset
| 0.999598 |
2305.14041
|
Alireza Darvishy
|
Alireza Darvishy, Rolf Sethe, Ines Engler, Oriane Pierres, Juliet
Manning
|
The state of scientific PDF accessibility in repositories: A survey in
Switzerland
|
We need to modify this paper and make some extensions before
re-uploading
| null | null | null |
cs.DL cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This survey analyzed the quality of the PDF documents on online repositories
in Switzerland, examining their accessibility for people with visual
impairments. Two minimal accessibility features were analyzed: the PDFs had to
have tags and a hierarchical heading structure. The survey also included
interviews with the managers or heads of multiple Swiss universities'
repositories to assess the general opinion and knowledge of PDF accessibility.
An analysis of interviewee responses indicates an overall lack of awareness of
PDF accessibility, and showed that online repositories currently have no
concrete plans to address the issue. This paper concludes by presenting a set
of recommendations for online repositories to improve the accessibility of
their PDF documents.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 13:13:35 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 08:45:14 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Darvishy",
"Alireza",
""
],
[
"Sethe",
"Rolf",
""
],
[
"Engler",
"Ines",
""
],
[
"Pierres",
"Oriane",
""
],
[
"Manning",
"Juliet",
""
]
] |
new_dataset
| 0.953111 |
2305.14293
|
Chenxi Whitehouse
|
Chenxi Whitehouse, Clara Vania, Alham Fikri Aji, Christos
Christodoulopoulos, Andrea Pierleoni
|
WebIE: Faithful and Robust Information Extraction on the Web
|
ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Extracting structured and grounded fact triples from raw text is a
fundamental task in Information Extraction (IE). Existing IE datasets are
typically collected from Wikipedia articles, using hyperlinks to link entities
to the Wikidata knowledge base. However, models trained only on Wikipedia have
limitations when applied to web domains, which often contain noisy text or text
that does not have any factual information. We present WebIE, the first
large-scale, entity-linked closed IE dataset consisting of 1.6M sentences
automatically collected from the English Common Crawl corpus. WebIE also
includes negative examples, i.e. sentences without fact triples, to better
reflect the data on the web. We annotate ~21K triples from WebIE through
crowdsourcing and introduce mWebIE, a translation of the annotated set in four
other languages: French, Spanish, Portuguese, and Hindi. We evaluate the
in-domain, out-of-domain, and zero-shot cross-lingual performance of generative
IE models and find models trained on WebIE show better generalisability. We
also propose three training strategies that use entity linking as an auxiliary
task. Our experiments show that adding Entity-Linking objectives improves the
faithfulness of our generative IE models.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:37:53 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 13:51:36 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Whitehouse",
"Chenxi",
""
],
[
"Vania",
"Clara",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Christodoulopoulos",
"Christos",
""
],
[
"Pierleoni",
"Andrea",
""
]
] |
new_dataset
| 0.998326 |
2305.15814
|
Yash Madhani
|
Yash Madhani, Mitesh M. Khapra, Anoop Kunchukuttan
|
Bhasha-Abhijnaanam: Native-script and romanized Language Identification
for 22 Indic languages
| null | null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We create publicly available language identification (LID) datasets and
models in all 22 Indian languages listed in the Indian constitution in both
native-script and romanized text. First, we create Bhasha-Abhijnaanam, a
language identification test set for native-script as well as romanized text
which spans all 22 Indic languages. We also train IndicLID, a language
identifier for all the above-mentioned languages in both native and romanized
script. For native-script text, it has better language coverage than existing
LIDs and is competitive with or better than other LIDs. IndicLID is the first LID
for romanized text in Indian languages. Two major challenges for romanized text
LID are the lack of training data and low LID performance when languages are
similar. We provide simple and effective solutions to these problems. In
general, there has been limited work on romanized text in any language, and our
findings are relevant to other languages that need romanized language
identification. Our models are publicly available at
https://ai4bharat.iitm.ac.in/indiclid under open-source licenses. Our training
and test sets are also publicly available at
https://ai4bharat.iitm.ac.in/bhasha-abhijnaanam under open-source licenses.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 07:53:23 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 11:39:03 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Madhani",
"Yash",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Kunchukuttan",
"Anoop",
""
]
] |
new_dataset
| 0.999912 |
2305.16133
|
Cheng Zhang
|
Zhiyu Tan, Zichao Dong, Cheng Zhang, Weikun Zhang, Hang Ji, Hao Li
|
OVO: Open-Vocabulary Occupancy
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic occupancy prediction aims to infer dense geometry and semantics of
surroundings for an autonomous agent to operate safely in the 3D environment.
Existing occupancy prediction methods are almost entirely trained on
human-annotated volumetric data. Although of high quality, the generation of
such 3D annotations is laborious and costly, restricting them to a few specific
object categories in the training dataset. To address this limitation, this
paper proposes Open Vocabulary Occupancy (OVO), a novel approach that allows
semantic occupancy prediction of arbitrary classes but without the need for 3D
annotations during training. Keys to our approach are (1) knowledge
distillation from a pre-trained 2D open-vocabulary segmentation model to the 3D
occupancy network, and (2) pixel-voxel filtering for high-quality training data
generation. The resulting framework is simple, compact, and compatible with
most state-of-the-art semantic occupancy prediction models. On NYUv2 and
SemanticKITTI datasets, OVO achieves competitive performance compared to
supervised semantic occupancy prediction approaches. Furthermore, we conduct
extensive analyses and ablation studies to offer insights into the design of
the proposed framework. Our code is publicly available at
https://github.com/dzcgaara/OVO.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 15:07:25 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 17:30:54 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Tan",
"Zhiyu",
""
],
[
"Dong",
"Zichao",
""
],
[
"Zhang",
"Cheng",
""
],
[
"Zhang",
"Weikun",
""
],
[
"Ji",
"Hang",
""
],
[
"Li",
"Hao",
""
]
] |
new_dataset
| 0.997639 |
2305.18897
|
Lucas Mourot
|
Lucas Mourot, Ludovic Hoyet, Fran\c{c}ois Le Clerc and Pierre Hellier
|
HuMoT: Human Motion Representation using Topology-Agnostic Transformers
for Character Animation Retargeting
|
17 pages, 12 figures, 5 tables
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion retargeting is a long-standing problem in character animation that
consists in transferring and adapting the motion of a source character to
another target character. A typical application is the creation of motion
sequences from off-the-shelf motions by transferring them onto new characters.
Motion retargeting also promises to increase the interoperability of existing
animation systems and motion databases, as they often differ in the structure
of the skeleton(s) considered. Moreover, since the goal of motion retargeting
is to abstract and transfer motion dynamics, effective solutions might provide
expressive and powerful human motion models in which operations such as
cleaning or editing are easier. In this article, we present a novel neural
network architecture for retargeting that extracts an abstract representation
of human motion agnostic to skeleton topology and morphology. Based on
transformers, our model is able to encode and decode motion sequences with
variable morphology and topology -- extending the current scope of retargeting
-- while supporting skeleton topologies not seen during the training phase.
More specifically, our model is structured as an autoencoder, and encoding and
decoding are separately conditioned on skeleton templates to extract and
control morphology and topology. Beyond motion retargeting, our model has many
applications since our abstract representation is a convenient space to embed
motion data from different sources. It may be beneficial to a number
of data-driven methods, allowing them to combine scarce specialised motion
datasets (e.g. with style or contact annotations) and larger general motion
datasets, for improved performance and generalisation ability. Moreover, we
show that our model can be useful for other applications beyond retargeting,
including motion denoising and joint upsampling.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 09:52:33 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 13:36:26 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jun 2023 08:33:57 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Mourot",
"Lucas",
""
],
[
"Hoyet",
"Ludovic",
""
],
[
"Clerc",
"François Le",
""
],
[
"Hellier",
"Pierre",
""
]
] |
new_dataset
| 0.973798 |
2306.01105
|
Sarah Masud
|
Atharva Kulkarni, Sarah Masud, Vikram Goyal, Tanmoy Chakraborty
|
Revisiting Hate Speech Benchmarks: From Data Curation to System
Deployment
|
15 pages, 4 figures, 11 tables. Accepted at SIGKDD'23
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media is awash with hateful content, much of which is often veiled
with linguistic and topical diversity. The benchmark datasets used for hate
speech detection do not account for such divagation as they are predominantly
compiled using hate lexicons. However, capturing hate signals becomes
challenging in neutrally-seeded malicious content. Thus, designing models and
datasets that mimic the real-world variability of hate warrants further
investigation.
To this end, we present GOTHate, a large-scale code-mixed crowdsourced
dataset of around 51k posts for hate speech detection from Twitter. GOTHate is
neutrally seeded, encompassing different languages and topics. We conduct
detailed comparisons of GOTHate with the existing hate speech datasets,
highlighting its novelty. We benchmark it with 10 recent baselines. Our
extensive empirical and benchmarking experiments suggest that GOTHate is hard
to classify in a text-only setup. Thus, we investigate how adding endogenous
signals enhances the hate speech detection task. We augment GOTHate with the
user's timeline information and ego network, bringing the overall data source
closer to the real-world setup for understanding hateful content. Our proposed
solution HEN-mBERT is a modular, multilingual, mixture-of-experts model that
enriches the linguistic subspace with latent endogenous signals from history,
topology, and exemplars. HEN-mBERT transcends the best baseline by 2.5% and 5%
in overall macro-F1 and hate class F1, respectively. Inspired by our
experiments, in partnership with Wipro AI, we are developing a semi-automated
pipeline to detect hateful content as a part of their mission to tackle online
harm.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 19:36:52 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 12:37:34 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Kulkarni",
"Atharva",
""
],
[
"Masud",
"Sarah",
""
],
[
"Goyal",
"Vikram",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] |
new_dataset
| 0.97587 |
2306.04428
|
Claytone Sikasote
|
Claytone Sikasote, Kalinda Siaminwe, Stanly Mwape, Bangiwe Zulu, Mofya
Phiri, Martin Phiri, David Zulu, Mayumbo Nyirenda, Antonios Anastasopoulos
|
Zambezi Voice: A Multilingual Speech Corpus for Zambian Languages
|
Accepted at INTERSPEECH 2023. This pre-print version differs slightly
from the version accepted to INTERSPEECH 2023: Figure 1 is not included in
INTERSPEECH 2023!
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work introduces Zambezi Voice, an open-source multilingual speech
resource for Zambian languages. It contains two collections of datasets:
unlabelled audio recordings of radio news and talk show programs (160 hours)
and labelled data (over 80 hours) consisting of read speech recorded from text
sourced from publicly available literature books. The dataset is created for
speech recognition but can be extended to multilingual speech processing
research for both supervised and unsupervised learning approaches. To our
knowledge, this is the first multilingual speech dataset created for Zambian
languages. We exploit pretraining and cross-lingual transfer learning by
finetuning the Wav2Vec2.0 large-scale multilingual pre-trained model to build
end-to-end (E2E) speech recognition models for our baseline models. The dataset
is released publicly under a Creative Commons BY-NC-ND 4.0 license and can be
accessed via https://github.com/unza-speech-lab/zambezi-voice .
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 13:36:37 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 20:48:02 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Sikasote",
"Claytone",
""
],
[
"Siaminwe",
"Kalinda",
""
],
[
"Mwape",
"Stanly",
""
],
[
"Zulu",
"Bangiwe",
""
],
[
"Phiri",
"Mofya",
""
],
[
"Phiri",
"Martin",
""
],
[
"Zulu",
"David",
""
],
[
"Nyirenda",
"Mayumbo",
""
],
[
"Anastasopoulos",
"Antonios",
""
]
] |
new_dataset
| 0.999731 |
2306.06070
|
Xiang Deng
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi
Wang, Huan Sun, Yu Su
|
Mind2Web: Towards a Generalist Agent for the Web
|
Website: https://osu-nlp-group.github.io/Mind2Web Updated with
supplementary material
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room for improvement towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 17:44:31 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 03:50:30 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Deng",
"Xiang",
""
],
[
"Gu",
"Yu",
""
],
[
"Zheng",
"Boyuan",
""
],
[
"Chen",
"Shijie",
""
],
[
"Stevens",
"Samuel",
""
],
[
"Wang",
"Boshi",
""
],
[
"Sun",
"Huan",
""
],
[
"Su",
"Yu",
""
]
] |
new_dataset
| 0.999882 |
2306.06300
|
Ziyang Yan
|
Ali Karami, Simone Rigon, Gabriele Mazzacca, Ziyang Yan, Fabio
Remondino
|
NERFBK: A High-Quality Benchmark for NERF-Based 3D Reconstruction
|
The paper's results have a problem
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a new real and synthetic dataset called NeRFBK
specifically designed for testing and comparing NeRF-based 3D reconstruction
algorithms. High-quality 3D reconstruction has significant potential in various
fields, and advancements in image-based algorithms make it essential to
evaluate new advanced techniques. However, gathering diverse data with precise
ground truth is challenging and may not encompass all relevant applications.
The NeRFBK dataset addresses this issue by providing multi-scale, indoor and
outdoor datasets with high-resolution images and videos and camera parameters
for testing and comparing NeRF-based algorithms. This paper presents the design
and creation of the NeRFBK benchmark, various examples and application
scenarios, and highlights its potential for advancing the field of 3D
reconstruction.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 23:28:33 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 10:51:34 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Karami",
"Ali",
""
],
[
"Rigon",
"Simone",
""
],
[
"Mazzacca",
"Gabriele",
""
],
[
"Yan",
"Ziyang",
""
],
[
"Remondino",
"Fabio",
""
]
] |
new_dataset
| 0.999902 |
2306.06924
|
Andrew Critch PhD
|
Andrew Critch and Stuart Russell
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
| null | null | null | null |
cs.AI cs.CR cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 07:55:18 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 18:55:50 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Critch",
"Andrew",
""
],
[
"Russell",
"Stuart",
""
]
] |
new_dataset
| 0.999716 |
2306.07220
|
Shervin Rasoulzadeh
|
S. Rasoulzadeh, M. Wimmer, and I. Kovacic
|
Strokes2Surface: Recovering Curve Networks From 4D Architectural Design
Sketches
|
14 pages, 16 figures
| null | null | null |
cs.GR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present Strokes2Surface, an offline geometry-reconstruction pipeline built
upon a 4D Sketching Interface, MR.Sketch, targeted at architectural design. The
pipeline recovers a curve network from designer-drawn strokes, thus bridging
between concept design and digital modeling stages in architectural design. The
input to our pipeline consists of 3D strokes' polyline vertices and their
corresponding timestamps (as of the fourth dimension), along with additional
geometric and stylus-related recorded properties. Inspired by sketch
consolidation and sketch-based modeling methods, our pipeline leverages such
data and combines three Machine Learning (ML) models: a classifier and two
clustering models. In particular, based on observations of practices designers
typically employ in architectural design sketches, we solve a binary
classification problem to recognize whether a stroke depicts a boundary and
edge or is used to fill in the enclosing areas and faces of the intended
architectural object. Followed by the two clustering models, strokes of each
type are further parsed into groups, each representing either a single edge or
a single face. Next, groups representing edges are approximated with B-spline
curves, followed by a topology-recovering process identifying and fixing
desired connectivities between the curves forming a well-connected curve
network. Next, groups representing the faces are employed to detect the cycles
bounding patches in the curve network, resulting in the final surface mesh
geometry of the architectural object. We confirm the usability of
Strokes2Surface via a user study and further validate and compare our results
against a range of reconstructions computed using alternative methods. We also
introduce our manually labeled dataset of 4D architectural design sketches for
further use in the community.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 16:26:38 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 15:40:46 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Rasoulzadeh",
"S.",
""
],
[
"Wimmer",
"M.",
""
],
[
"Kovacic",
"I.",
""
]
] |
new_dataset
| 0.989476 |
2306.07695
|
Evangelos Bitsikas
|
Evangelos Bitsikas, Theodor Schnitzler, Christina P\"opper, Aanjhan
Ranganathan
|
Freaky Leaky SMS: Extracting User Locations by Analyzing SMS Timings
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Short Message Service (SMS) remains one of the most popular communication
channels since its introduction in 2G cellular networks. In this paper, we
demonstrate that merely receiving silent SMS messages regularly opens a
stealthy side-channel that allows other regular network users to infer the
whereabouts of the SMS recipient. The core idea is that receiving an SMS
inevitably generates Delivery Reports, whose reception provides a timing attack
vector to the sender. We conducted experiments across various countries,
operators, and devices to show that an attacker can deduce the location of an
SMS recipient by analyzing timing measurements from typical receiver locations.
Our results show that, after training an ML model, the SMS sender can
accurately determine multiple locations of the recipient. For example, our
model achieves up to 96% accuracy for locations across different countries, and
86% for two locations within Belgium. Due to the way cellular networks are
designed, it is difficult to prevent Delivery Reports from being returned to
the originator, making it challenging to thwart this covert attack without
making fundamental changes to the network architecture.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 11:20:18 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 08:36:18 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Bitsikas",
"Evangelos",
""
],
[
"Schnitzler",
"Theodor",
""
],
[
"Pöpper",
"Christina",
""
],
[
"Ranganathan",
"Aanjhan",
""
]
] |
new_dataset
| 0.99431 |
2306.07974
|
Cuneyt Gurcan Akcora
|
Poupak Azad, Baris Coskunuzer, Murat Kantarcioglu, Cuneyt Gurcan
Akcora
|
Chainlet Orbits: Topological Address Embedding for the Bitcoin
Blockchain
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of cryptocurrencies like Bitcoin, which enable transactions with a
degree of pseudonymity, has led to a surge in various illicit activities,
including ransomware payments and transactions on darknet markets. These
illegal activities often utilize Bitcoin as the preferred payment method.
However, current tools for detecting illicit behavior either rely on a few
heuristics and laborious data collection processes or employ computationally
inefficient graph neural network (GNN) models that are challenging to
interpret.
To overcome the computational and interpretability limitations of existing
techniques, we introduce an effective solution called Chainlet Orbits. This
approach embeds Bitcoin addresses by leveraging their topological
characteristics in transactions. By employing our innovative address embedding,
we investigate e-crime in Bitcoin networks by focusing on distinctive
substructures that arise from illicit behavior.
The results of our node classification experiments demonstrate superior
performance compared to state-of-the-art methods, including both topological
and GNN-based approaches. Moreover, our approach enables the use of
interpretable and explainable machine learning models in as little as 15
minutes for most days on the Bitcoin transaction network.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 21:16:59 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Azad",
"Poupak",
""
],
[
"Coskunuzer",
"Baris",
""
],
[
"Kantarcioglu",
"Murat",
""
],
[
"Akcora",
"Cuneyt Gurcan",
""
]
] |
new_dataset
| 0.99824 |
2306.08004
|
Edgar Hernando Sepulveda Oviedo
|
Edgar Hernando Sep\'ulveda Oviedo (LAAS-DISCO, LAAS-ISGE), Louise
Trav\'e-Massuy\`es, Audine Subias, Marko Pavlov, Corinne Alonso
|
Detection and classification of faults aimed at preventive maintenance
of PV systems
| null |
XI Congreso Internacional de Ingenier{\'i}a Mec{\'a}nica,
Mecatr{\'o}nica y Automatizaci{\'o}n 2023, Universidad Nacional de Colombia,
Apr 2023, Carthag{\`e}ne, Colombia
| null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diagnosis in PV systems aims to detect, locate and identify faults.
Diagnosing these faults is vital to guarantee energy production and extend the
useful life of PV power plants. In the literature, multiple machine learning
approaches have been proposed for this purpose. However, few of these works
have paid special attention to the detection of fine faults and the specialized
process of extraction and selection of features for their classification. A
fine fault is one whose characteristic signature is difficult to distinguish to
that of a healthy panel. As a contribution to the detection of fine faults
(especially of the snail trail type), this article proposes an innovative
approach based on the Random Forest (RF) algorithm. This approach uses a
complex feature extraction and selection method that improves the computational
time of fault classification while maintaining high accuracy.
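As an illustration of the general approach described in this abstract (feature extraction and selection feeding a Random Forest classifier), the following is a minimal scikit-learn sketch. It is not code from the paper; the feature matrix, fault labels, and parameter choices (e.g. k=10 selected features, 200 trees) are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's implementation): feature selection
# followed by a Random Forest for PV fault classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))      # placeholder: 200 panels, 40 extracted features
y = rng.integers(0, 3, size=200)    # placeholder labels: 0 healthy, 1 snail trail, 2 other

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),                      # keep the 10 most informative features
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```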
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 07:44:47 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Oviedo",
"Edgar Hernando Sepúlveda",
"",
"LAAS-DISCO, LAAS-ISGE"
],
[
"Travé-Massuyès",
"Louise",
""
],
[
"Subias",
"Audine",
""
],
[
"Pavlov",
"Marko",
""
],
[
"Alonso",
"Corinne",
""
]
] |
new_dataset
| 0.987976 |
2306.08020
|
Susan Leavy Dr
|
Susan Leavy, Gerardine Meaney, Karen Wade and Derek Greene
|
Curatr: A Platform for Semantic Analysis and Curation of Historical
Literary Texts
|
12 pages
|
Metadata and Semantic Research (MTSR 2019), Communications in
Computer and Information Science, vol 1057. Springer, Cham
|
10.1007/978-3-030-36599-8_31
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing availability of digital collections of historical and
contemporary literature presents a wealth of possibilities for new research in
the humanities. The scale and diversity of such collections however, presents
particular challenges in identifying and extracting relevant content. This
paper presents Curatr, an online platform for the exploration and curation of
literature with machine learning-supported semantic search, designed within the
context of digital humanities scholarship. The platform provides a text mining
workflow that combines neural word embeddings with expert domain knowledge to
enable the generation of thematic lexicons, allowing researchers to curate
relevant sub-corpora from a large corpus of 18th and 19th century digitised
texts.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 15:15:31 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Leavy",
"Susan",
""
],
[
"Meaney",
"Gerardine",
""
],
[
"Wade",
"Karen",
""
],
[
"Greene",
"Derek",
""
]
] |
new_dataset
| 0.979945 |
2306.08126
|
Xu Han
|
Xu Han, Bin Guo, Yoon Jung, Benjamin Yao, Yu Zhang, Xiaohu Liu,
Chenlei Guo
|
PersonaPKT: Building Personalized Dialogue Agents via
Parameter-efficient Knowledge Transfer
|
10 pages, 3 figures, accepted to SustaiNLP 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personalized dialogue agents (DAs) powered by large pre-trained language
models (PLMs) often rely on explicit persona descriptions to maintain
personality consistency. However, such descriptions may not always be available
or may pose privacy concerns. To tackle this bottleneck, we introduce
PersonaPKT, a lightweight transfer learning approach that can build
persona-consistent dialogue models without explicit persona descriptions. By
representing each persona as a continuous vector, PersonaPKT learns implicit
persona-specific features directly from a small number of dialogue samples
produced by the same persona, adding less than 0.1% trainable parameters for
each persona on top of the PLM backbone. Empirical results demonstrate that
PersonaPKT effectively builds personalized DAs with high storage efficiency,
outperforming various baselines in terms of persona consistency while
maintaining good response generation quality. In addition, it enhances privacy
protection by avoiding explicit persona descriptions. Overall, PersonaPKT is an
effective solution for creating personalized DAs that respect user privacy.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 20:47:29 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Han",
"Xu",
""
],
[
"Guo",
"Bin",
""
],
[
"Jung",
"Yoon",
""
],
[
"Yao",
"Benjamin",
""
],
[
"Zhang",
"Yu",
""
],
[
"Liu",
"Xiaohu",
""
],
[
"Guo",
"Chenlei",
""
]
] |
new_dataset
| 0.997318 |
2306.08127
|
Merve G\"ulmez
|
Merve G\"ulmez, Thomas Nyman, Christoph Baumann, Jan Tobias M\"uhlberg
|
Friend or Foe Inside? Exploring In-Process Isolation to Maintain Memory
Safety for Unsafe Rust
| null | null | null | null |
cs.CR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rust is a popular memory-safe systems programming language. In order to
interact with hardware or call into non-Rust libraries, Rust provides
\emph{unsafe} language features that shift responsibility for ensuring memory
safety to the developer. Failing to do so may lead to memory safety violations
in unsafe code which can violate safety of the entire application. In this work
we explore in-process isolation with Memory Protection Keys as a mechanism to
shield safe program sections from safety violations that may happen in unsafe
sections. Our approach is easy to use and comprehensive as it prevents heap and
stack-based violations. We further compare process-based and in-process
isolation mechanisms and the necessary requirements for data serialization,
communication, and context switching. Our results show that in-process
isolation can be effective and efficient, permits for a high degree of
automation, and also enables a notion of application rewinding where the safe
program section may detect and safely handle violations in unsafe code.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 20:48:13 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Gülmez",
"Merve",
""
],
[
"Nyman",
"Thomas",
""
],
[
"Baumann",
"Christoph",
""
],
[
"Mühlberg",
"Jan Tobias",
""
]
] |
new_dataset
| 0.997288 |
2306.08132
|
Dylan Turpin
|
Dylan Turpin, Tao Zhong, Shutong Zhang, Guanglei Zhu, Jingzhou Liu,
Ritvik Singh, Eric Heiden, Miles Macklin, Stavros Tsogkas, Sven Dickinson,
Animesh Garg
|
Fast-Grasp'D: Dexterous Multi-finger Grasp Generation Through
Differentiable Simulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-finger grasping relies on high quality training data, which is hard to
obtain: human data is hard to transfer and synthetic data relies on simplifying
assumptions that reduce grasp quality. By making grasp simulation
differentiable, and contact dynamics amenable to gradient-based optimization,
we accelerate the search for high-quality grasps with fewer limiting
assumptions. We present Grasp'D-1M: a large-scale dataset for multi-finger
robotic grasping, synthesized with Fast-Grasp'D, a novel differentiable
grasping simulator. Grasp'D-1M contains one million training examples for
three robotic hands (three, four and five-fingered), each with multimodal
visual inputs (RGB+depth+segmentation, available in mono and stereo). Grasp
synthesis with Fast-Grasp'D is 10x faster than GraspIt! and 20x faster than the
prior Grasp'D differentiable simulator. Generated grasps are more stable and
contact-rich than GraspIt! grasps, regardless of the distance threshold used
for contact generation. We validate the usefulness of our dataset by retraining
an existing vision-based grasping pipeline on Grasp'D-1M, and showing a
dramatic increase in model performance, predicting grasps with 30% more
contact, a 33% higher epsilon metric, and 35% lower simulated displacement.
Additional details at https://dexgrasp.github.io.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 20:54:07 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Turpin",
"Dylan",
""
],
[
"Zhong",
"Tao",
""
],
[
"Zhang",
"Shutong",
""
],
[
"Zhu",
"Guanglei",
""
],
[
"Liu",
"Jingzhou",
""
],
[
"Singh",
"Ritvik",
""
],
[
"Heiden",
"Eric",
""
],
[
"Macklin",
"Miles",
""
],
[
"Tsogkas",
"Stavros",
""
],
[
"Dickinson",
"Sven",
""
],
[
"Garg",
"Animesh",
""
]
] |
new_dataset
| 0.96448 |
2306.08141
|
Kailas Vodrahalli
|
Kailas Vodrahalli and James Zou
|
ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in
Artistic Creations
|
20 pages, 13 figures
| null | null | null |
cs.AI cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As generative AI becomes more prevalent, it is important to study how human
users interact with such models. In this work, we investigate how people use
text-to-image models to generate desired target images. To study this
interaction, we created ArtWhisperer, an online game where users are given a
target image and are tasked with iteratively finding a prompt that creates a
similar-looking image as the target. Through this game, we recorded over 50,000
human-AI interactions; each interaction corresponds to one text prompt created
by a user and the corresponding generated image. The majority of these are
repeated interactions where a user iterates to find the best prompt for their
target image, making this a unique sequential dataset for studying human-AI
collaborations. In an initial analysis of this dataset, we identify several
characteristics of prompt interactions and user strategies. People submit
diverse prompts and are able to discover a variety of text descriptions that
generate similar images. Interestingly, prompt diversity does not decrease as
users find better prompts. We further propose a new metric to study the
steerability of AI using our dataset. We define steerability as the expected
number of interactions required to adequately complete a task. We estimate this
value by fitting a Markov chain for each target task and calculating the
expected time to reach an adequate score in the Markov chain. We quantify and
compare AI steerability across different types of target images and two
different models, finding that images of cities and natural world images are
more steerable than artistic and fantasy images. These findings provide
insights into human-AI interaction behavior, present a concrete method of
assessing AI steerability, and demonstrate the general utility of the
ArtWhisperer dataset.
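The steerability estimate described in this abstract (the expected number of interactions, computed as the expected absorption time of a per-task Markov chain) can be illustrated with a minimal numerical sketch. The code below is not from the paper; the state granularity and transition probabilities are hypothetical placeholders, and in practice they would be estimated from the logged prompt/score sequences.

```python
# Illustrative sketch only: expected interactions as the expected absorption
# time of an absorbing Markov chain, via the fundamental matrix N = (I - Q)^-1.
import numpy as np

P = np.array([
    [0.5, 0.3, 0.1, 0.1],   # from "low score"
    [0.1, 0.5, 0.3, 0.1],   # from "medium score"
    [0.0, 0.1, 0.5, 0.4],   # from "high but not yet adequate"
    [0.0, 0.0, 0.0, 1.0],   # "adequate score" is absorbing
])

Q = P[:3, :3]                           # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)        # fundamental matrix of the chain
expected_interactions = N @ np.ones(3)  # expected steps to absorption per start state
print(expected_interactions)
```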
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 21:10:45 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Vodrahalli",
"Kailas",
""
],
[
"Zou",
"James",
""
]
] |
new_dataset
| 0.99872 |
2306.08144
|
Piergiuseppe Mallozzi
|
Piergiuseppe Mallozzi, Nir Piterman, Pierluigi Nuzzo, Gerardo
Schneider, Patrizio Pelliccione
|
Correct-by-Construction Design of Contextual Robotic Missions Using
Contracts
| null | null | null | null |
cs.SE cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effectively specifying and implementing robotic missions pose a set of
challenges to software engineering for robotic systems, since they require
formalizing and executing a robot's high-level tasks while considering various
application scenarios and conditions, also known as contexts, in real-world
operational environments.
Writing correct mission specifications that explicitly account for multiple
contexts can be a tedious and error-prone task. Moreover, as the number of
contexts grows and, hence, the specification becomes more complex, generating a
correct-by-construction implementation, e.g., by using synthesis methods, can
become intractable. A viable approach to address these issues is to decompose
the mission specification into smaller sub-missions, with each sub-mission
corresponding to a specific context. However, such a compositional approach
would still pose challenges in ensuring the overall mission correctness.
In this paper, we propose a new, compositional framework for the
specification and implementation of contextual robotic missions using
assume-guarantee contracts. The mission specification is captured in a
hierarchical and modular way and each sub-mission is synthesized as a robot
controller. We address the problem of dynamically switching between sub-mission
controllers while ensuring correctness under certain conditions.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 21:29:17 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Mallozzi",
"Piergiuseppe",
""
],
[
"Piterman",
"Nir",
""
],
[
"Nuzzo",
"Pierluigi",
""
],
[
"Schneider",
"Gerardo",
""
],
[
"Pelliccione",
"Patrizio",
""
]
] |
new_dataset
| 0.997043 |
2306.08171
|
Mark Quinlan
|
Mark Quinlan, Aaron Cross, Andrew Simpson
|
The aesthetics of cyber security: How do users perceive them?
|
Submitted to Philosophy and Technology Journal in late 2022
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
While specific aesthetic philosophies may differ across cultures, all human
societies have used aesthetics to support communication and learning. Within
the fields of usability and usable security, aesthetics have been deployed for
such diverse purposes as enhancing students' e-learning experiences and
optimising user interface design. In this paper, we seek to understand how
individual users perceive the visual assets that accompany cyber security
information, and how these visual assets and user perceptions underwrite a
distinct \emph{cyber security aesthetic}. We ask, (1) What constitutes cyber
security aesthetics, from the perspective of an individual user? and (2) How
might these aesthetics affect users' perceived self-efficacy as they informally
learn cyber security precepts? To begin answering these questions, we compile
an image-set from cyber security web articles and analyse the distinct visual
properties and sentiments of these images.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 23:19:47 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Quinlan",
"Mark",
""
],
[
"Cross",
"Aaron",
""
],
[
"Simpson",
"Andrew",
""
]
] |
new_dataset
| 0.95338 |
2306.08188
|
Sanat Sharma
|
Sanat Sharma, Jayant Kumar, Jing Zheng, Tracy Holloway King
|
Contextual Font Recommendations based on User Intent
|
In Proceedings of ACM SIGIR Workshop on eCommerce (SIGIR eCom'23)
| null | null | null |
cs.HC cs.IR cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Adobe Fonts has a rich library of over 20,000 unique fonts that Adobe users
utilize for creating graphics, posters, composites etc. Due to the nature of
the large library, knowing what font to select can be a daunting task that
requires a lot of experience. For most users in Adobe products, especially
casual users of Adobe Express, this often means choosing the default font
instead of utilizing the rich and diverse fonts available. In this work, we
create an intent-driven system to provide contextual font recommendations to
users to aid in their creative journey. Our system takes in multilingual text
input and recommends suitable fonts based on the user's intent. Based on user
entitlements, the mix of free and paid fonts is adjusted. The feature is
currently used by millions of Adobe Express users with a CTR of >25%.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 01:15:55 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Sharma",
"Sanat",
""
],
[
"Kumar",
"Jayant",
""
],
[
"Zheng",
"Jing",
""
],
[
"King",
"Tracy Holloway",
""
]
] |
new_dataset
| 0.984186 |
2306.08196
|
Rukshani Somarathna
|
Rukshani Somarathna, Patrik Vuilleumier and Gelareh Mohammadi
|
EmoStim: A Database of Emotional Film Clips with Discrete and
Componential Assessment
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Emotion elicitation using emotional film clips is one of the most common and
ecologically valid methods in Affective Computing. However, selecting and
validating appropriate materials that evoke a range of emotions is challenging.
Here we present EmoStim: A Database of Emotional Film Clips as a film library
with rich and varied content. EmoStim is designed for researchers interested
in studying emotions in relation to either discrete or componential models of
emotion. To create the database, 139 film clips were selected from literature
and then annotated by 638 participants through the CrowdFlower platform. We
selected 99 film clips based on the distribution of subjective ratings that
effectively distinguished between emotions defined by the discrete model. We
show that the selected film clips reliably induce a range of specific emotions
according to the discrete model. Further, we describe relationships between
emotions, emotion organization in the componential space, and underlying
dimensions representing emotional experience. The EmoStim database and
participant annotations are freely available for research purposes. The
database can be used to enrich our understanding of emotions further and serve
as a guide to select or create additional materials.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 01:47:59 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Somarathna",
"Rukshani",
""
],
[
"Vuilleumier",
"Patrik",
""
],
[
"Mohammadi",
"Gelareh",
""
]
] |
new_dataset
| 0.999538 |
2306.08226
|
Jingyu Hu
|
Jingyu Hu, Ka-Hei Hui, Zhengzhe liu, Hao Zhang and Chi-Wing Fu
|
CLIPXPlore: Coupled CLIP and Shape Spaces for 3D Shape Exploration
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents CLIPXPlore, a new framework that leverages a
vision-language model to guide the exploration of the 3D shape space. Many
recent methods have been developed to encode 3D shapes into a learned latent
shape space to enable generative design and modeling. Yet, existing methods
lack effective exploration mechanisms, despite the rich information. To this
end, we propose to leverage CLIP, a powerful pre-trained vision-language model,
to aid the shape-space exploration. Our idea is threefold. First, we couple the
CLIP and shape spaces by generating paired CLIP and shape codes through sketch
images and training a mapper network to connect the two spaces. Second, to
explore the space around a given shape, we formulate a co-optimization strategy
to search for the CLIP code that better matches the geometry of the shape.
Third, we design three exploration modes, binary-attribute-guided, text-guided,
and sketch-guided, to locate suitable exploration trajectories in shape space
and induce meaningful changes to the shape. We perform a series of experiments
to quantitatively and visually compare CLIPXPlore with different baselines in
each of the three exploration modes, showing that CLIPXPlore can produce many
meaningful exploration results that cannot be achieved by the existing
solutions.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 03:39:32 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Hu",
"Jingyu",
""
],
[
"Hui",
"Ka-Hei",
""
],
[
"liu",
"Zhengzhe",
""
],
[
"Zhang",
"Hao",
""
],
[
"Fu",
"Chi-Wing",
""
]
] |
new_dataset
| 0.993019 |
2306.08251
|
Jieren Deng
|
Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, and Derek Aguiar
|
GBSD: Generative Bokeh with Stage Diffusion
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The bokeh effect is an artistic technique that blurs out-of-focus areas in a
photograph and has gained interest due to recent developments in text-to-image
synthesis and the ubiquity of smart-phone cameras and photo-sharing apps. Prior
work on rendering bokeh effects have focused on post hoc image manipulation to
produce similar blurring effects in existing photographs using classical
computer graphics or neural rendering techniques, but have either depth
discontinuity artifacts or are restricted to reproducing bokeh effects that are
present in the training data. More recent diffusion based models can synthesize
images with an artistic style, but either require the generation of
high-dimensional masks or expensive fine-tuning, or affect global image
characteristics. In this paper, we present GBSD, the first generative
text-to-image model that synthesizes photorealistic images with a bokeh style.
Motivated by how image synthesis occurs progressively in diffusion models, our
approach combines latent diffusion models with a 2-stage conditioning algorithm
to render bokeh effects on semantically defined objects. Since we can focus the
effect on objects, this semantic bokeh effect is more versatile than classical
rendering techniques. We evaluate GBSD both quantitatively and qualitatively
and demonstrate its ability to be applied in both text-to-image and
image-to-image settings.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 05:34:02 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Deng",
"Jieren",
""
],
[
"Zhou",
"Xin",
""
],
[
"Tian",
"Hao",
""
],
[
"Pan",
"Zhihong",
""
],
[
"Aguiar",
"Derek",
""
]
] |
new_dataset
| 0.999342 |
2306.08259
|
Xu Liu
|
Xu Liu, Yutong Xia, Yuxuan Liang, Junfeng Hu, Yiwei Wang, Lei Bai,
Chao Huang, Zhenguang Liu, Bryan Hooi, Roger Zimmermann
|
LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic forecasting plays a critical role in smart city initiatives and has
experienced significant advancements thanks to the power of deep learning in
capturing non-linear patterns of traffic data. However, the promising results
achieved on current public datasets may not be applicable to practical
scenarios due to limitations within these datasets. First, their limited sizes
may not reflect the real-world scale of traffic networks. Second, the
temporal coverage of these datasets is typically short, posing hurdles in
studying long-term patterns and acquiring sufficient samples for training deep
models. Third, these datasets often lack adequate metadata for sensors, which
compromises the reliability and interpretability of the data. To mitigate these
limitations, we introduce the LargeST benchmark dataset. It encompasses a total
number of 8,600 sensors with a 5-year time coverage and includes comprehensive
metadata. Using LargeST, we perform in-depth data analysis to extract data
insights, benchmark well-known baselines in terms of their performance and
efficiency, and identify challenges as well as opportunities for future
research. We release the datasets and baseline implementations at:
https://github.com/liuxu77/LargeST.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 05:48:36 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Liu",
"Xu",
""
],
[
"Xia",
"Yutong",
""
],
[
"Liang",
"Yuxuan",
""
],
[
"Hu",
"Junfeng",
""
],
[
"Wang",
"Yiwei",
""
],
[
"Bai",
"Lei",
""
],
[
"Huang",
"Chao",
""
],
[
"Liu",
"Zhenguang",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Zimmermann",
"Roger",
""
]
] |
new_dataset
| 0.999787 |
2306.08276
|
Luyang Zhu
|
Luyang Zhu, Dawei Yang, Tyler Zhu, Fitsum Reda, William Chan, Chitwan
Saharia, Mohammad Norouzi, Ira Kemelmacher-Shlizerman
|
TryOnDiffusion: A Tale of Two UNets
|
CVPR 2023. Project page: https://tryondiffusion.github.io/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Given two images depicting a person and a garment worn by another person, our
goal is to generate a visualization of how the garment might look on the input
person. A key challenge is to synthesize a photorealistic detail-preserving
visualization of the garment, while warping the garment to accommodate a
significant body pose and shape change across the subjects. Previous methods
either focus on garment detail preservation without effective pose and shape
variation, or allow try-on with the desired shape and pose but lack garment
details. In this paper, we propose a diffusion-based architecture that unifies
two UNets (referred to as Parallel-UNet), which allows us to preserve garment
details and warp the garment for significant pose and body change in a single
network. The key ideas behind Parallel-UNet include: 1) garment is warped
implicitly via a cross attention mechanism, 2) garment warp and person blend
happen as part of a unified process as opposed to a sequence of two separate
tasks. Experimental results indicate that TryOnDiffusion achieves
state-of-the-art performance both qualitatively and quantitatively.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 06:25:58 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zhu",
"Luyang",
""
],
[
"Yang",
"Dawei",
""
],
[
"Zhu",
"Tyler",
""
],
[
"Reda",
"Fitsum",
""
],
[
"Chan",
"William",
""
],
[
"Saharia",
"Chitwan",
""
],
[
"Norouzi",
"Mohammad",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
]
] |
new_dataset
| 0.999379 |
2306.08277
|
Mridul Gupta
|
Mridul Gupta, Hariprasad Kodamana, Sayan Ranu
|
FRIGATE: Frugal Spatio-temporal Forecasting on Road Networks
| null |
Proceedings of the 29th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD 23), 2023
|
10.1145/3580305.3599357
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Modelling spatio-temporal processes on road networks is a task of growing
importance. While significant progress has been made on developing
spatio-temporal graph neural networks (Gnns), existing works are built upon
three assumptions that are not practical on real-world road networks. First,
they assume sensing on every node of a road network. In reality, due to
budget-constraints or sensor failures, all locations (nodes) may not be
equipped with sensors. Second, they assume that sensing history is available at
all installed sensors. This is unrealistic as well due to sensor failures, loss
of packets during communication, etc. Finally, there is an assumption of static
road networks. Connectivity within networks changes due to road closures,
construction of new roads, etc. In this work, we develop FRIGATE to address
all these shortcomings. FRIGATE is powered by a spatio-temporal Gnn that
integrates positional, topological, and temporal information into rich
inductive node representations. The joint fusion of this diverse information is
made feasible through a novel combination of gated Lipschitz embeddings with
Lstms. We prove that the proposed Gnn architecture is more expressive
than message-passing Gnns used in state-of-the-art algorithms. The higher
expressivity of FRIGATE naturally translates to superior empirical performance
in experiments conducted on real-world network-constrained traffic data. In addition, FRIGATE
is robust to frugal sensor deployment, changes in road network connectivity,
and temporal irregularity in sensing.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 06:28:26 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Gupta",
"Mridul",
""
],
[
"Kodamana",
"Hariprasad",
""
],
[
"Ranu",
"Sayan",
""
]
] |
new_dataset
| 0.999213 |
2306.08374
|
Takanori Ashihara
|
Takanori Ashihara, Takafumi Moriya, Kohei Matsuura, Tomohiro Tanaka,
Yusuke Ijima, Taichi Asami, Marc Delcroix, Yukinori Honma
|
SpeechGLUE: How Well Can Self-Supervised Speech Models Capture
Linguistic Knowledge?
|
Accepted at INTERSPEECH 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning (SSL) for speech representation has been
successfully applied in various downstream tasks, such as speech and speaker
recognition. More recently, speech SSL models have also been shown to be
beneficial in advancing spoken language understanding tasks, implying that the
SSL models have the potential to learn not only acoustic but also linguistic
information. In this paper, we aim to clarify if speech SSL techniques can well
capture linguistic knowledge. For this purpose, we introduce SpeechGLUE, a
speech version of the General Language Understanding Evaluation (GLUE)
benchmark. Since GLUE comprises a variety of natural language understanding
tasks, SpeechGLUE can elucidate the degree of linguistic ability of speech SSL
models. Experiments demonstrate that speech SSL models, although inferior to
text-based SSL models, perform better than baselines, suggesting that they can
acquire a certain amount of general linguistic knowledge from just unlabeled
speech data.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 09:04:29 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ashihara",
"Takanori",
""
],
[
"Moriya",
"Takafumi",
""
],
[
"Matsuura",
"Kohei",
""
],
[
"Tanaka",
"Tomohiro",
""
],
[
"Ijima",
"Yusuke",
""
],
[
"Asami",
"Taichi",
""
],
[
"Delcroix",
"Marc",
""
],
[
"Honma",
"Yukinori",
""
]
] |
new_dataset
| 0.99147 |
2306.08401
|
Jingsheng Gao
|
Jingsheng Gao, Yixin Lian, Ziyi Zhou, Yuzhuo Fu, Baoyuan Wang
|
LiveChat: A Large-Scale Personalized Dialogue Dataset Automatically
Constructed from Live Streaming
|
ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-domain dialogue systems have made promising progress in recent years.
While the state-of-the-art dialogue agents are built upon large-scale
text-based social media data and large pre-trained models, there is no
guarantee these agents could also perform well in fast-growing scenarios, such
as live streaming, due to the bounded transferability of pre-trained models and
biased distributions of public datasets from Reddit and Weibo, etc. To improve
the essential capability of responding and establish a benchmark in the live
open-domain scenario, we introduce the LiveChat dataset, composed of 1.33
million real-life Chinese dialogues with almost 3800 average sessions across
351 personas and fine-grained profiles for each persona. LiveChat is
automatically constructed by processing numerous live videos on the Internet
and naturally falls within the scope of multi-party conversations, where the
issues of Who says What to Whom should be considered. Therefore, we target two
critical tasks of response modeling and addressee recognition and propose
retrieval-based baselines grounded on advanced techniques. Experimental results
have validated the positive effects of leveraging persona profiles and larger
average sessions per persona. In addition, we also benchmark the
transferability of advanced generation-based models on LiveChat and pose some
future directions for current challenges.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 09:50:06 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Gao",
"Jingsheng",
""
],
[
"Lian",
"Yixin",
""
],
[
"Zhou",
"Ziyi",
""
],
[
"Fu",
"Yuzhuo",
""
],
[
"Wang",
"Baoyuan",
""
]
] |
new_dataset
| 0.999617 |
2306.08417
|
Ke Deng
|
Ke Deng, Zhiyuan He, Haohan Lin, Hao Zhang, Desheng Wang
|
A Novel Channel-Constrained Model for 6G Vehicular Networks with Traffic
Spikes
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile Edge Computing (MEC) holds excellent potential in Congestion
Management (CM) of 6G vehicular networks. A reasonable schedule of MEC ensures
a more reliable and efficient CM system. Unfortunately, existing parallel and
sequential models cannot cope with scarce computing resources and constrained
channels, especially during traffic rush hour. In this paper, we propose a
channel-constrained multi-core sequential model (CCMSM) for task offloading and
resource allocation. The CCMSM incorporates a utility index that couples system
energy consumption and delay, applying a Genetic Algorithm combined with the Sparrow
Search Algorithm (GA-SSA) in the branching optimization. Furthermore, we prove
that the system delay is the shortest with the FCFS computing strategy in the
MEC server. Simulation demonstrates that the proposed CCMSM achieves a higher
optimization level and exhibits better robustness and resilient scalability for
traffic spikes.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 10:28:03 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Deng",
"Ke",
""
],
[
"He",
"Zhiyuan",
""
],
[
"Lin",
"Haohan",
""
],
[
"Zhang",
"Hao",
""
],
[
"Wang",
"Desheng",
""
]
] |
new_dataset
| 0.980797 |
2306.08418
|
Emmanouil Papadogiannakis
|
Emmanouil Papadogiannakis, Nicolas Kourtellis, Panagiotis
Papadopoulos, Evangelos P. Markatos
|
The Devil is in the Details: Analyzing the Lucrative Ad Fraud Patterns
of the Online Ad Ecosystem
|
17 pages
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The online advertising market has recently reached the 500 billion dollar
mark, and to accommodate the need to match a user with the highest bidder at a
fraction of a second, it has moved towards a complex automated model involving
numerous agents and middlemen. Stimulated by potential revenue and the lack of
transparency, bad actors have found ways to abuse it, circumvent restrictions,
and generate substantial revenue from objectionable and even illegal content.
To make matters worse, they often receive advertisements from respectable
companies which have nothing to do with these illegal activities. Altogether,
advertiser money is funneled towards unknown entities, supporting their
objectionable operations and maintaining their existence.
In this project, we work towards understanding the extent of the problem and
shed light on how shady agents take advantage of gaps in the ad ecosystem to
monetize their operations. We study over 7 million websites and examine how
state-of-the-art standards associated with online advertising are applied. We
discover and present actual practices observed in the wild and show that
publishers are able to monetize objectionable and illegal content and generate
thousands of dollars of revenue on a monthly basis.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 10:28:07 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Papadogiannakis",
"Emmanouil",
""
],
[
"Kourtellis",
"Nicolas",
""
],
[
"Papadopoulos",
"Panagiotis",
""
],
[
"Markatos",
"Evangelos P.",
""
]
] |
new_dataset
| 0.998692 |
2306.08475
|
Laura Crosara
|
Laura Crosara, Nicola Laurenti, Leonardo Badia
|
It Is Rude to Ask a Sensor Its Age-of-Information: Status Updates
Against an Eavesdropping Node
| null | null | null | null |
cs.CR cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We consider periodical status updates between a transmitter and a legitimate
receiver, in the presence of an eavesdropper that is sometimes able to capture
pieces of information. We assume that, in the absence of such a threat, the
connection between the transmitter and the receiver is controlled by the
transmitter with the aim to minimize the age of information at the receiver's
side. However, if the presence of an eavesdropper is known, the transmitter may
further tune the generation rate of status updates to trade off the age of
information values acquired by the eavesdropper and the receiver, respectively.
To analyze this problem, we first propose a metric that combines both
objectives according to a Bergson social welfare framework, and then we solve
the problem of finding the optimal generation rate as a function of the
probability of data capture by the eavesdropper. This enables us to derive
notable and sometimes counter-intuitive conclusions, and possibly establish an
extension of the age of information framework to security aspects from a
performance evaluation perspective.
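As a rough illustration of the trade-off under study, the toy discrete-time simulation below (an assumption-laden sketch, not the paper's analytical model) tracks the age of information at the receiver, which obtains every periodic update, and at an eavesdropper, which captures each update only with probability p; varying the generation period shifts both averages.

```python
# Toy sketch, not the paper's model: periodic updates with period T always reach
# the legitimate receiver, while the eavesdropper captures each update only with
# probability p. We report the time-averaged age of information on both sides.
import random

def average_ages(T, p, horizon=20000, seed=0):
    rng = random.Random(seed)
    age_rx = age_eve = 0.0
    sum_rx = sum_eve = 0.0
    for t in range(horizon):
        if t % T == 0:                # a fresh status update is generated and sent
            age_rx = 0.0
            if rng.random() < p:      # the eavesdropper only sometimes captures it
                age_eve = 0.0
        sum_rx += age_rx
        sum_eve += age_eve
        age_rx += 1.0
        age_eve += 1.0
    return sum_rx / horizon, sum_eve / horizon

for T in (5, 10, 20):
    a_rx, a_eve = average_ages(T, p=0.3)
    print(f"period={T:2d}  avg AoI receiver={a_rx:5.1f}  avg AoI eavesdropper={a_eve:5.1f}")
```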
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 12:42:00 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Crosara",
"Laura",
""
],
[
"Laurenti",
"Nicola",
""
],
[
"Badia",
"Leonardo",
""
]
] |
new_dataset
| 0.990536 |
2306.08502
|
Giuseppe Attanasio
|
Alkis Koudounas, Moreno La Quatra, Lorenzo Vaiani, Luca Colomba,
Giuseppe Attanasio, Eliana Pastor, Luca Cagliero, Elena Baralis
|
ITALIC: An Italian Intent Classification Dataset
|
Accepted at INTERSPEECH 2023. Data and code at
https://github.com/RiTA-nlp/ITALIC
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Recent large-scale Spoken Language Understanding datasets focus predominantly
on English and do not account for language-specific phenomena such as
particular phonemes or words in different lects. We introduce ITALIC, the first
large-scale speech dataset designed for intent classification in Italian. The
dataset comprises 16,521 crowdsourced audio samples recorded by 70 speakers
from various Italian regions and annotated with intent labels and additional
metadata. We explore the versatility of ITALIC by evaluating current
state-of-the-art speech and text models. Results on intent classification
suggest that increasing scale and running language adaptation yield better
speech models, monolingual text models outscore multilingual ones, and that
speech recognition on ITALIC is more challenging than on existing Italian
benchmarks. We release both the dataset and the annotation scheme to streamline
the development of new Italian SLU models and language-specific datasets.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 13:36:24 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Koudounas",
"Alkis",
""
],
[
"La Quatra",
"Moreno",
""
],
[
"Vaiani",
"Lorenzo",
""
],
[
"Colomba",
"Luca",
""
],
[
"Attanasio",
"Giuseppe",
""
],
[
"Pastor",
"Eliana",
""
],
[
"Cagliero",
"Luca",
""
],
[
"Baralis",
"Elena",
""
]
] |
new_dataset
| 0.999899 |
2306.08505
|
Mohammad Mahdi Abdollah Pour Mr
|
Griffin Floto, Mohammad Mahdi Abdollah Pour, Parsa Farinneya, Zhenwei
Tang, Ali Pesaranghader, Manasa Bharadwaj, Scott Sanner
|
DiffuDetox: A Mixed Diffusion Model for Text Detoxification
|
7 pages, 1 figure, ACL findings 2023
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text detoxification is a conditional text generation task aiming to remove
offensive content from toxic text. It is highly useful for online forums and
social media, where offensive content is frequently encountered. Intuitively,
there are diverse ways to detoxify sentences while preserving their meanings,
and we can select from detoxified sentences before displaying text to users.
Conditional diffusion models are particularly suitable for this task given
their demonstrated higher generative diversity than existing conditional text
generation models based on language models. Nonetheless, text fluency declines
when they are trained with insufficient data, which is the case for this task.
In this work, we propose DiffuDetox, a mixed conditional and unconditional
diffusion model for text detoxification. The conditional model takes toxic text
as the condition and reduces its toxicity, yielding a diverse set of detoxified
sentences. The unconditional model is trained to recover the input text, which
allows the introduction of additional fluent text for training and thus ensures
text fluency. Extensive experimental results and in-depth analysis demonstrate
the effectiveness of our proposed DiffuDetox.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 13:41:23 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Floto",
"Griffin",
""
],
[
"Pour",
"Mohammad Mahdi Abdollah",
""
],
[
"Farinneya",
"Parsa",
""
],
[
"Tang",
"Zhenwei",
""
],
[
"Pesaranghader",
"Ali",
""
],
[
"Bharadwaj",
"Manasa",
""
],
[
"Sanner",
"Scott",
""
]
] |
new_dataset
| 0.9785 |
2306.08526
|
Erion \c{C}ano
|
Erion \c{C}ano
|
AlbMoRe: A Corpus of Movie Reviews for Sentiment Analysis in Albanian
|
4 pages, 3 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lack of available resources such as text corpora for low-resource languages
seriously hinders research on natural language processing and computational
linguistics. This paper presents AlbMoRe, a corpus of 800 sentiment annotated
movie reviews in Albanian. Each text is labeled as positive or negative and can
be used for sentiment analysis research. Preliminary results based on
traditional machine learning classifiers trained with the AlbMoRe samples are
also reported. They can serve as comparison baselines for future research
experiments.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 14:21:55 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Çano",
"Erion",
""
]
] |
new_dataset
| 0.998442 |
2306.08531
|
Fernando Amodeo
|
Fernando Amodeo, No\'e P\'erez-Higueras, Luis Merino, Fernando
Caballero
|
FROG: A new people detection dataset for knee-high 2D range finders
|
Code and data are publicly available at:
https://github.com/robotics-upo/2DLaserPeopleBenchmark
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mobile robots require knowledge of the environment, especially of humans
located in its vicinity. While the most common approaches for detecting humans
involve computer vision, an often overlooked hardware feature of robots for
people detection are their 2D range finders. These were originally intended for
obstacle avoidance and mapping/SLAM tasks. In most robots, they are
conveniently located at a height approximately between the ankle and the knee,
so they can be used for detecting people too, and with a larger field of view
and depth resolution compared to cameras.
In this paper, we present a new dataset for people detection using knee-high
2D range finders called FROG. This dataset has greater laser resolution,
scanning frequency, and more complete annotation data compared to existing
datasets such as DROW. Particularly, the FROG dataset contains annotations for
100% of its laser scans (unlike DROW which only annotates 5%), 17x more
annotated scans, 100x more people annotations, and over twice the distance
traveled by the robot. We propose a benchmark based on the FROG dataset, and
analyze a collection of state-of-the-art people detectors based on 2D range
finder data.
We also propose and evaluate a new end-to-end deep learning approach for
people detection. Our solution works with the raw sensor data directly (not
needing hand-crafted input data features), thus avoiding CPU preprocessing and
relieving the developer from having to understand specific domain heuristics.
Experimental results show how the proposed people detector attains results
comparable to the state of the art, while an optimized implementation for ROS
can operate at more than 500 Hz.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 14:24:10 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Amodeo",
"Fernando",
""
],
[
"Pérez-Higueras",
"Noé",
""
],
[
"Merino",
"Luis",
""
],
[
"Caballero",
"Fernando",
""
]
] |
new_dataset
| 0.999882 |
2306.08594
|
Yannick Schmitz
|
Yannick Schmitz and Egon Wanke
|
The directed metric dimension of directed co-graphs
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A vertex $w$ resolves two vertices $u$ and $v$ in a directed graph $G$ if the
distance from $w$ to $u$ is different from the distance from $w$ to $v$. A set of
vertices $R$ is a resolving set for a directed graph $G$ if for every pair of
vertices $u, v$ which are not in $R$ there is at least one vertex in $R$ that
resolves $u$ and $v$ in $G$. The directed metric dimension of a directed graph
$G$ is the size of a minimum resolving set for $G$. The decision problem
Directed Metric Dimension for a given directed graph $G$ and a given number $k$
asks whether $G$ has a resolving set of size at most $k$. In this
paper, we study directed co-graphs. We introduce a linear time algorithm for
computing a minimum resolving set for directed co-graphs and show that Directed
Metric Dimension already is NP-complete for directed acyclic graphs.
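For readers unfamiliar with the definitions, the short sketch below checks them by brute force on a toy digraph using BFS distances; it is exponential in the worst case and is not the linear-time co-graph algorithm proposed in the paper.

```python
# Brute-force check of the directed metric dimension definition on a toy digraph.
from itertools import combinations
from collections import deque

def distances_from(graph, w):
    """BFS distances from w in a directed graph given as {u: [out-neighbours]}."""
    dist = {w: 0}
    queue = deque([w])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist  # unreachable vertices are absent (distance "infinity")

def is_resolving(graph, R):
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    dist = {w: distances_from(graph, w) for w in R}
    inf = float("inf")
    for u, v in combinations(nodes - set(R), 2):
        if all(dist[w].get(u, inf) == dist[w].get(v, inf) for w in R):
            return False  # no vertex in R resolves u and v
    return True

def directed_metric_dimension(graph):
    nodes = sorted(set(graph) | {v for outs in graph.values() for v in outs})
    for k in range(1, len(nodes) + 1):
        for R in combinations(nodes, k):
            if is_resolving(graph, set(R)):
                return k, set(R)

G = {0: [1], 1: [2], 2: [0, 3], 3: []}
print(directed_metric_dimension(G))  # (1, {0})
```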
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 15:55:11 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Schmitz",
"Yannick",
""
],
[
"Wanke",
"Egon",
""
]
] |
new_dataset
| 0.998609 |
2306.08595
|
Jos\'e Ram\'on Pareja Monturiol
|
Jos\'e Ram\'on Pareja Monturiol, David P\'erez-Garc\'ia, Alejandro
Pozas-Kerstjens
|
TensorKrowch: Smooth integration of tensor networks in machine learning
|
17 pages, 2 figures, RevTex4.2. The TensorKrowch GitHub repository is
in https://github.com/joserapa98/tensorkrowch and the TensorKrowch
documentation is in https://joserapa98.github.io/tensorkrowch
| null | null | null |
cs.LG cond-mat.stat-mech cond-mat.str-el quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tensor networks are factorizations of high-dimensional tensors into networks
of smaller tensors. They have applications in physics and mathematics, and
recently have been proposed as promising machine learning architectures. To
ease the integration of tensor networks in machine learning pipelines, we
introduce TensorKrowch, an open source Python library built on top of PyTorch.
Providing a user-friendly interface, TensorKrowch allows users to construct any
tensor network, train it, and integrate it as a layer in more intricate deep
learning models. In this paper, we describe the main functionality and basic
usage of TensorKrowch, and provide technical details on its building blocks and
the optimizations performed to achieve efficient operation.
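The snippet below is not TensorKrowch's actual API; it is a hand-rolled PyTorch sketch of the kind of object such a library manages: a small matrix-product-state (MPS) layer whose trainable cores are contracted with a feature-mapped input to produce class scores, and which could sit as a layer inside a larger deep learning model.

```python
# Hand-rolled MPS-style layer in plain PyTorch (illustrative; not TensorKrowch code).
import torch
import torch.nn as nn

class TinyMPSLayer(nn.Module):
    def __init__(self, n_sites: int, phys_dim: int = 2, bond_dim: int = 8, n_out: int = 10):
        super().__init__()
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(bond_dim, phys_dim, bond_dim))
            for _ in range(n_sites)
        ])
        self.left = nn.Parameter(0.1 * torch.randn(bond_dim))
        self.right = nn.Parameter(0.1 * torch.randn(bond_dim, n_out))

    def forward(self, x):
        # x: (batch, n_sites) with entries in [0, 1]; simple local feature map.
        phi = torch.stack([1 - x, x], dim=-1)           # (batch, n_sites, phys_dim)
        env = self.left.expand(x.shape[0], -1)          # (batch, bond_dim)
        for i, core in enumerate(self.cores):
            # contract the physical index with the feature map, then the bond index
            site = torch.einsum('bp,lpr->blr', phi[:, i], core)
            env = torch.einsum('bl,blr->br', env, site)
        return env @ self.right                         # (batch, n_out) class scores

layer = TinyMPSLayer(n_sites=16)
print(layer(torch.rand(4, 16)).shape)  # torch.Size([4, 10])
```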
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 15:55:19 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Monturiol",
"José Ramón Pareja",
""
],
[
"Pérez-García",
"David",
""
],
[
"Pozas-Kerstjens",
"Alejandro",
""
]
] |
new_dataset
| 0.996685 |
2306.08609
|
Federico Semeraro
|
Federico Semeraro, Alexandre Quintart, Sergio Fraile Izquierdo, Joseph
C. Ferguson
|
TomoSAM: a 3D Slicer extension using SAM for tomography segmentation
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
TomoSAM has been developed to integrate the cutting-edge Segment Anything
Model (SAM) into 3D Slicer, a highly capable software platform used for 3D
image processing and visualization. SAM is a promptable deep learning model
that is able to identify objects and create image masks in a zero-shot manner,
based only on a few user clicks. The synergy between these tools aids in the
segmentation of complex 3D datasets from tomography or other imaging
techniques, which would otherwise require a laborious manual segmentation
process. The source code associated with this article can be found at
https://github.com/fsemerar/SlicerTomoSAM
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 16:13:27 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Semeraro",
"Federico",
""
],
[
"Quintart",
"Alexandre",
""
],
[
"Izquierdo",
"Sergio Fraile",
""
],
[
"Ferguson",
"Joseph C.",
""
]
] |
new_dataset
| 0.999771 |
2306.08625
|
Zhenghang Yuan
|
Zhenghang Yuan, Lichao Mou, Yuansheng Hua, Xiao Xiang Zhu
|
RRSIS: Referring Remote Sensing Image Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Localizing desired objects from remote sensing images is of great use in
practical applications. Referring image segmentation, which aims at segmenting
out the objects to which a given expression refers, has been extensively
studied in natural images. However, almost no research attention has been given to
this task for remote sensing imagery. Considering its potential for real-world
applications, in this paper, we introduce referring remote sensing image
segmentation (RRSIS) to fill in this gap and make some insightful explorations.
Specifically, we create a new dataset, called RefSegRS, for this task, enabling
us to evaluate different methods. Afterward, we benchmark referring image
segmentation methods of natural images on the RefSegRS dataset and find that
these models show limited efficacy in detecting small and scattered objects. To
alleviate this issue, we propose a language-guided cross-scale enhancement
(LGCE) module that utilizes linguistic features to adaptively enhance
multi-scale visual features by integrating both deep and shallow features. The
proposed dataset, benchmarking results, and the designed LGCE module provide
insights into the design of a better RRSIS model. We will make our dataset and
code publicly available.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 16:40:19 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Yuan",
"Zhenghang",
""
],
[
"Mou",
"Lichao",
""
],
[
"Hua",
"Yuansheng",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.999779 |
2306.08649
|
Quentin Delfosse
|
Quentin Delfosse, Jannis Bl\"uml, Bjarne Gregori, Sebastian
Sztwiertnia, Kristian Kersting
|
OCAtari: Object-Centric Atari 2600 Reinforcement Learning Environments
|
26 pages, 9 main paper pages, 14 appendix pages. In main paper: 5
figures, 2 tables
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Cognitive science and psychology suggest that object-centric representations
of complex scenes are a promising step towards enabling efficient abstract
reasoning from low-level perceptual features. Yet, most deep reinforcement
learning approaches rely on only pixel-based representations that do not
capture the compositional properties of natural scenes. For this, we need
environments and datasets that allow us to work and evaluate object-centric
approaches. We present OCAtari, a set of environments that provide
object-centric state representations of Atari games, the most-used evaluation
framework for deep RL approaches. OCAtari also allows for RAM state
manipulations of the games to change and create specific or even novel
situations. The code base for this work is available at
github.com/k4ntz/OC_Atari.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 17:28:46 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Delfosse",
"Quentin",
""
],
[
"Blüml",
"Jannis",
""
],
[
"Gregori",
"Bjarne",
""
],
[
"Sztwiertnia",
"Sebastian",
""
],
[
"Kersting",
"Kristian",
""
]
] |
new_dataset
| 0.999721 |
2306.08657
|
Mijanur Palash
|
Mijanur Palash, Bharat Bhargava
|
EMERSK -- Explainable Multimodal Emotion Recognition with Situational
Knowledge
|
Emotion Recognition, Deep Learning, Multi-modal, Convolutional neural
network (CNN), LSTM, Situational-Knowledge, Novelty
| null | null | null |
cs.CV cs.LG cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Automatic emotion recognition has recently gained significant attention due
to the growing popularity of deep learning algorithms. One of the primary
challenges in emotion recognition is effectively utilizing the various cues
(modalities) available in the data. Another challenge is providing a proper
explanation of the outcome of the learning. To address these challenges, we
present Explainable Multimodal Emotion Recognition with Situational Knowledge
(EMERSK), a generalized and modular system for human emotion recognition and
explanation using visual information. Our system can handle multiple
modalities, including facial expressions, posture, and gait, in a flexible and
modular manner. The network consists of different modules that can be added or
removed depending on the available data. We utilize a two-stream network
architecture with convolutional neural networks (CNNs) and encoder-decoder
style attention mechanisms to extract deep features from face images.
Similarly, CNNs and recurrent neural networks (RNNs) with Long Short-term
Memory (LSTM) are employed to extract features from posture and gait data. We
also incorporate deep features from the background as contextual information
for the learning process. The deep features from each module are fused using an
early fusion network. Furthermore, we leverage situational knowledge derived
from the location type and adjective-noun pair (ANP) extracted from the scene,
as well as the spatio-temporal average distribution of emotions, to generate
explanations. Ablation studies demonstrate that each sub-network can
independently perform emotion recognition, and combining them in a multimodal
approach significantly improves overall recognition performance. Extensive
experiments conducted on various benchmark datasets, including GroupWalk,
validate the superior performance of our approach compared to other
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 17:52:37 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Palash",
"Mijanur",
""
],
[
"Bhargava",
"Bharat",
""
]
] |
new_dataset
| 0.983036 |
2306.08658
|
Gregor Geigle
|
Gregor Geigle, Radu Timofte, Goran Glava\v{s}
|
Babel-ImageNet: Massively Multilingual Evaluation of Vision-and-Language
Representations
| null | null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-and-language (VL) models with separate encoders for each modality
(e.g., CLIP) have become the go-to models for zero-shot image classification
and image-text retrieval. The bulk of the evaluation of these models is,
however, performed with English text only: the costly creation of
language-specific image-caption datasets has limited multilingual VL benchmarks
to a handful of high-resource languages. In this work, we introduce
Babel-ImageNet, a massively multilingual benchmark that offers (partial)
translations of 1000 ImageNet labels to 92 languages, built without resorting
to machine translation (MT) or requiring manual annotation. We instead
automatically obtain reliable translations of ImageNet concepts by linking
them -- via shared WordNet synsets -- to BabelNet, a massively multilingual
lexico-semantic network. We evaluate 8 different publicly available
multilingual CLIP models on zero-shot image classification (ZS-IC) for each of
the 92 Babel-ImageNet languages, demonstrating a significant gap between
English ImageNet performance and that of high-resource languages (e.g., German
or Chinese), and an even bigger gap for low-resource languages (e.g., Sinhala
or Lao). Crucially, we show that the models' ZS-IC performance on
Babel-ImageNet highly correlates with their performance in image-text
retrieval, validating that Babel-ImageNet is suitable for estimating the
quality of the multilingual VL representation spaces for the vast majority of
languages that lack gold image-text data. Finally, we show that the performance
of multilingual CLIP for low-resource languages can be drastically improved via
cheap, parameter-efficient language-specific training. We make our code and
data publicly available: \url{https://github.com/gregor-ge/Babel-ImageNet}
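The zero-shot image classification protocol evaluated by the benchmark can be sketched in a few lines; `encode_text` below stands in for any multilingual CLIP-style text encoder and is an assumption, not a specific library call.

```python
# Generic zero-shot classification sketch: label names (in any language) are
# embedded as text, and each image is assigned the label with the highest
# cosine similarity. Encoders here are placeholders, not a specific model API.
import numpy as np

def zero_shot_classify(image_embeddings, class_names, encode_text, template="{}"):
    text_emb = np.stack([encode_text(template.format(name)) for name in class_names])
    text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
    img_emb = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    return (img_emb @ text_emb.T).argmax(axis=1)   # predicted label index per image

# demo with a fake 8-dimensional embedding space and German label names
rng = np.random.default_rng(0)
fake_encode_text = lambda s: rng.standard_normal(8)
print(zero_shot_classify(rng.standard_normal((4, 8)), ["Hund", "Katze", "Vogel"], fake_encode_text))
```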
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 17:53:06 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Geigle",
"Gregor",
""
],
[
"Timofte",
"Radu",
""
],
[
"Glavaš",
"Goran",
""
]
] |
new_dataset
| 0.999485 |
2306.08666
|
Zhengliang Liu
|
Zhengliang Liu, Aoxiao Zhong, Yiwei Li, Longtao Yang, Chao Ju, Zihao
Wu, Chong Ma, Peng Shu, Cheng Chen, Sekeun Kim, Haixing Dai, Lin Zhao,
Dajiang Zhu, Jun Liu, Wei Liu, Dinggang Shen, Xiang Li, Quanzheng Li,
Tianming Liu
|
Radiology-GPT: A Large Language Model for Radiology
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Radiology-GPT, a large language model for radiology. Using an
instruction tuning approach on an extensive dataset of radiology domain
knowledge, Radiology-GPT demonstrates superior performance compared to general
language models such as StableLM, Dolly and LLaMA. It exhibits significant
versatility in radiological diagnosis, research, and communication. This work
serves as a catalyst for future developments in clinical NLP. The successful
implementation of Radiology-GPT is indicative of the potential of localizing
generative large language models, specifically tailored for distinctive medical
specialties, while ensuring adherence to privacy standards such as HIPAA. The
prospect of developing individualized, large-scale language models that cater
to specific needs of various hospitals presents a promising direction. The
fusion of conversational competence and domain-specific knowledge in these
models is set to foster future development in healthcare AI. A demo of
Radiology-GPT is available at
https://huggingface.co/spaces/allen-eric/radiology-gpt.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 17:57:24 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Liu",
"Zhengliang",
""
],
[
"Zhong",
"Aoxiao",
""
],
[
"Li",
"Yiwei",
""
],
[
"Yang",
"Longtao",
""
],
[
"Ju",
"Chao",
""
],
[
"Wu",
"Zihao",
""
],
[
"Ma",
"Chong",
""
],
[
"Shu",
"Peng",
""
],
[
"Chen",
"Cheng",
""
],
[
"Kim",
"Sekeun",
""
],
[
"Dai",
"Haixing",
""
],
[
"Zhao",
"Lin",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Liu",
"Jun",
""
],
[
"Liu",
"Wei",
""
],
[
"Shen",
"Dinggang",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Liu",
"Tianming",
""
]
] |
new_dataset
| 0.982398 |
2306.08701
|
Adithya Hegde
|
Kinar S, Prashanth K V, Adithya Hegde, Aditya Subrahmanya Bhat,
Narender M
|
Transpiling RTL Pseudo-code of the POWER Instruction Set Architecture to
C for Real-time Performance Analysis on Cavatools Simulator
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a transpiler framework for converting RTL pseudo code of
the POWER Instruction Set Architecture (ISA) to C code, enabling its execution
on the Cavatools simulator. The transpiler consists of a lexer and parser,
which parse the RTL pseudo code and generate corresponding C code
representations. The lexer tokenizes the input code, while the parser applies
grammar rules to build an abstract syntax tree (AST). The transpiler ensures
compatibility with the Cavatools simulator by generating C code that adheres to
its requirements. The resulting C code can be executed on the Cavatools
simulator, allowing developers to analyze the instruction-level performance of
the Power ISA in real time. The proposed framework facilitates the seamless
integration of RTL pseudo code into the Cavatools ecosystem, enabling
comprehensive performance analysis and optimization of Power ISA-based code.
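A toy end-to-end illustration of the lexer, parser, AST, and C-emission flow is given below for a single made-up pseudo-code statement; the real POWER ISA pseudo-code grammar and the generated Cavatools-compatible C are far richer than this sketch, and the `cpu->` structure is a hypothetical placeholder.

```python
# Toy lexer -> parser -> AST -> C emitter for one made-up pseudo-code statement.
import re

TOKEN_RE = re.compile(r"\s*(?:(<-)|(\+)|(\()|(\))|([A-Za-z_]\w*))")

def lex(src):
    tokens, pos, src = [], 0, src.strip()
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}: {src[pos]!r}")
        tokens.append(m.group().strip())
        pos = m.end()
    return tokens

def parse_assignment(tokens):
    # toy grammar: ident '<-' operand '+' operand ; operand := ident | '(' ident ')'
    def operand(i):
        if tokens[i] == "(":
            return ("reg", tokens[i + 1]), i + 3
        return ("reg", tokens[i]), i + 1
    target = tokens[0]
    assert tokens[1] == "<-", "expected assignment"
    lhs, i = operand(2)
    assert tokens[i] == "+", "expected '+'"
    rhs, _ = operand(i + 1)
    return ("assign", target, ("add", lhs, rhs))      # the abstract syntax tree

def emit_c(ast):
    _, target, (_, (_, a), (_, b)) = ast
    return f"cpu->{target} = cpu->{a} + cpu->{b};"    # 'cpu' struct is hypothetical

print(emit_c(parse_assignment(lex("RT <- (RA) + (RB)"))))
# cpu->RT = cpu->RA + cpu->RB;
```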
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 18:53:14 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"S",
"Kinar",
""
],
[
"K",
"Prashanth",
"V"
],
[
"Hegde",
"Adithya",
""
],
[
"Bhat",
"Aditya Subrahmanya",
""
],
[
"M",
"Narender",
""
]
] |
new_dataset
| 0.984034 |
2306.08731
|
Dima Damen
|
Vadim Tschernezki, Ahmad Darkhalil, Zhifan Zhu, David Fouhey, Iro
Laina, Diane Larlus, Dima Damen, Andrea Vedaldi
|
EPIC Fields: Marrying 3D Geometry and Video Understanding
|
20 pages, 16 figures. Project Webpage:
http://epic-kitchens.github.io/epic-fields
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural rendering is fuelling a unification of learning, 3D geometry and video
understanding that has been waiting for more than two decades. Progress,
however, is still hampered by a lack of suitable datasets and benchmarks. To
address this gap, we introduce EPIC Fields, an augmentation of EPIC-KITCHENS
with 3D camera information. Like other datasets for neural rendering, EPIC
Fields removes the complex and expensive step of reconstructing cameras using
photogrammetry, and allows researchers to focus on modelling problems. We
illustrate the challenge of photogrammetry in egocentric videos of dynamic
actions and propose innovations to address them. Compared to other neural
rendering datasets, EPIC Fields is better tailored to video understanding
because it is paired with labelled action segments and the recent VISOR segment
annotations. To further motivate the community, we also evaluate two benchmark
tasks in neural rendering and segmenting dynamic objects, with strong baselines
that showcase what is not possible today. We also highlight the advantage of
geometry in semi-supervised video object segmentations on the VISOR
annotations. EPIC Fields reconstructs 96% of videos in EPIC-KITCHENS,
registering 19M frames in 99 hours recorded in 45 kitchens.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 20:33:49 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Tschernezki",
"Vadim",
""
],
[
"Darkhalil",
"Ahmad",
""
],
[
"Zhu",
"Zhifan",
""
],
[
"Fouhey",
"David",
""
],
[
"Laina",
"Iro",
""
],
[
"Larlus",
"Diane",
""
],
[
"Damen",
"Dima",
""
],
[
"Vedaldi",
"Andrea",
""
]
] |
new_dataset
| 0.995826 |
2306.08734
|
Samuel D McDermott
|
Samuel D. McDermott, M. Voetberg, Brian Nord
|
WavPool: A New Block for Deep Neural Networks
|
8+8 pages, 3+3 figures
| null | null |
FERMILAB-CONF-23-278-CSAID
|
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern deep neural networks comprise many operational layers, such as dense
or convolutional layers, which are often collected into blocks. In this work,
we introduce a new, wavelet-transform-based network architecture that we call
the multi-resolution perceptron: by adding a pooling layer, we create a new
network block, the WavPool. The first step of the multi-resolution perceptron
is transforming the data into its multi-resolution decomposition form by
convolving the input data with filters of fixed coefficients but increasing
size. Following image processing techniques, we are able to make scale and
spatial information simultaneously accessible to the network without increasing
the size of the data vector. WavPool outperforms a similar multilayer
perceptron while using fewer parameters, and outperforms a comparable
convolutional neural network by ~10% in relative accuracy on CIFAR-10.
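The multi-resolution decomposition underlying the block can be illustrated with a plain Haar wavelet transform, in which fixed-coefficient low- and high-pass filters expose coarse (scale) and fine (spatial) information while keeping the total number of coefficients equal to the input length; the sketch below illustrates only that transform, not the WavPool block itself.

```python
# Plain Haar wavelet decomposition (illustration only; not the WavPool block).
import numpy as np

def haar_multiresolution(x, levels=3):
    """Repeated fixed-coefficient low/high-pass filtering with downsampling:
    the total number of coefficients stays equal to len(x), yet coarse (scale)
    and fine (spatial) information are both exposed to a downstream network."""
    coeffs, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        low = (approx[0::2] + approx[1::2]) / np.sqrt(2)    # local averages
        high = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # local differences
        coeffs.append(high)
        approx = low
    coeffs.append(approx)
    return coeffs

signal = np.sin(np.linspace(0, 8 * np.pi, 256))
print([band.shape for band in haar_multiresolution(signal)])
# [(128,), (64,), (32,), (32,)]
```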
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 20:35:01 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"McDermott",
"Samuel D.",
""
],
[
"Voetberg",
"M.",
""
],
[
"Nord",
"Brian",
""
]
] |
new_dataset
| 0.999536 |
2306.08772
|
Vladislav Kurenkov
|
Vladislav Kurenkov, Alexander Nikulin, Denis Tarasov, Sergey
Kolesnikov
|
Katakomba: Tools and Benchmarks for Data-Driven NetHack
|
Source code at https://github.com/tinkoff-ai/katakomba
| null | null | null |
cs.LG cs.AI cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
NetHack is known as the frontier of reinforcement learning research where
learning-based methods still need to catch up to rule-based solutions. One of
the promising directions for a breakthrough is using pre-collected datasets
similar to recent developments in robotics, recommender systems, and more under
the umbrella of offline reinforcement learning (ORL). Recently, a large-scale
NetHack dataset was released; while it was a necessary step forward, it has yet
to gain wide adoption in the ORL community. In this work, we argue that there
are three major obstacles for adoption: tool-wise, implementation-wise, and
benchmark-wise. To address them, we develop an open-source library that
provides workflow fundamentals familiar to the ORL community: pre-defined
D4RL-style tasks, uncluttered baseline implementations, and reliable evaluation
tools with accompanying configs and logs synced to the cloud.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 22:50:25 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Kurenkov",
"Vladislav",
""
],
[
"Nikulin",
"Alexander",
""
],
[
"Tarasov",
"Denis",
""
],
[
"Kolesnikov",
"Sergey",
""
]
] |
new_dataset
| 0.999474 |
2306.08807
|
Bhargav Chandaka
|
Yuan Shen, Bhargav Chandaka, Zhi-hao Lin, Albert Zhai, Hang Cui, David
Forsyth and Shenlong Wang
|
Sim-on-Wheels: Physical World in the Loop Simulation for Self-Driving
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present Sim-on-Wheels, a safe, realistic, and vehicle-in-loop framework to
test autonomous vehicles' performance in the real world under safety-critical
scenarios. Sim-on-Wheels runs on a self-driving vehicle operating in the
physical world. It creates virtual traffic participants with risky behaviors
and seamlessly inserts the virtual events into images perceived from the
physical world in real-time. The manipulated images are fed into autonomy,
allowing the self-driving vehicle to react to such virtual events. The full
pipeline runs on the actual vehicle and interacts with the physical world, but
the safety-critical events it sees are virtual. Sim-on-Wheels is safe,
interactive, realistic, and easy to use. The experiments demonstrate the
potential of Sim-on-Wheels to facilitate the process of testing autonomous
driving in challenging real-world scenes with high fidelity and low risk.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 01:49:42 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Shen",
"Yuan",
""
],
[
"Chandaka",
"Bhargav",
""
],
[
"Lin",
"Zhi-hao",
""
],
[
"Zhai",
"Albert",
""
],
[
"Cui",
"Hang",
""
],
[
"Forsyth",
"David",
""
],
[
"Wang",
"Shenlong",
""
]
] |
new_dataset
| 0.999478 |
2306.08834
|
Jian-Wei Zhang
|
Wei Zhang, Jason K. Wong, Yitian Chen, Ailing Jia, Luwei Wang,
Jian-Wei Zhang, Lechao Cheng, and Wei Chen
|
ScrollTimes: Tracing the Provenance of Paintings as a Window into
History
|
Tech Report, 11 pages, 7 figures
| null | null | null |
cs.HC cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital humanities research has flourished due to the diverse artifacts
available in cultural heritage databases. However, over-reliance on a single
artifact type can result in poor contextualization and a constrained
understanding of historical context. We collaborated with art historians to
examine handscrolls, a form of traditional Chinese painting which offers a
wealth of data for historical analysis and provides a unique opportunity for
understanding history through artwork. We propose ScrollTimes, a visual
analysis system for tracing handscroll historic context by linking multiple
data sources. Specifically, a unique layout is developed for efficiently
viewing long handscrolls. Using image processing techniques and language
models, we extract, verify, and supplement elements in handscrolls with
different cultural heritage databases. Furthermore, interactive biographies are
constructed for handscrolls to uncover their historical narratives, provenance
trajectories, and artistic legacies. Validated through case studies and expert
interviews, our approach offers a window into history, fostering a holistic
understanding of handscroll provenance and historical significance.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 03:38:09 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zhang",
"Wei",
""
],
[
"Wong",
"Jason K.",
""
],
[
"Chen",
"Yitian",
""
],
[
"Jia",
"Ailing",
""
],
[
"Wang",
"Luwei",
""
],
[
"Zhang",
"Jian-Wei",
""
],
[
"Cheng",
"Lechao",
""
],
[
"Chen",
"Wei",
""
]
] |
new_dataset
| 0.996866 |
2306.08839
|
Federica Spinola
|
Federica Spinola, Philipp Benz, Minhyeong Yu, Tae-hoon Kim
|
Knowledge Assembly: Semi-Supervised Multi-Task Learning from Multiple
Datasets with Disjoint Labels
|
Accepted at CVPRW'23
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In real-world scenarios we often need to perform multiple tasks
simultaneously. Multi-Task Learning (MTL) is an adequate method to do so, but
usually requires datasets labeled for all tasks. We propose a method that can
leverage datasets labeled for only some of the tasks in the MTL framework. Our
work, Knowledge Assembly (KA), learns multiple tasks from disjoint datasets by
leveraging the unlabeled data in a semi-supervised manner, using model
augmentation for pseudo-supervision. Whilst KA can be implemented on any
existing MTL networks, we test our method on jointly learning person
re-identification (reID) and pedestrian attribute recognition (PAR). We surpass
the single task fully-supervised performance by $4.2\%$ points for reID and
$0.9\%$ points for PAR.
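A minimal runnable toy of the pseudo-supervision idea is sketched below (it is not the authors' KA implementation; the two-head network, loss weights, and EMA rate are arbitrary assumptions): each disjoint dataset supervises its own task, while an augmented copy of the model pseudo-labels the task that is unlabeled in that batch.

```python
# Toy two-task pseudo-supervision loop with dummy data (illustrative assumptions only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        self.reid_head = nn.Linear(128, 10)   # toy "identity" classes
        self.par_head = nn.Linear(128, 5)     # toy "attribute" classes
    def forward(self, x):
        f = self.backbone(x)
        return self.reid_head(f), self.par_head(f)

student = TwoHeadNet()
teacher = copy.deepcopy(student)              # "model augmentation": a slow copy
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    x_reid, y_reid = torch.randn(32, 64), torch.randint(0, 10, (32,))  # labeled for reID only
    x_par,  y_par  = torch.randn(32, 64), torch.randint(0, 5,  (32,))  # labeled for PAR only

    reid_logits, par_unlab = student(x_reid)
    reid_unlab, par_logits = student(x_par)

    with torch.no_grad():                     # teacher pseudo-labels the missing task
        pseudo_par = teacher(x_reid)[1].argmax(dim=1)
        pseudo_reid = teacher(x_par)[0].argmax(dim=1)

    loss = (F.cross_entropy(reid_logits, y_reid) + F.cross_entropy(par_logits, y_par)
            + 0.5 * F.cross_entropy(par_unlab, pseudo_par)
            + 0.5 * F.cross_entropy(reid_unlab, pseudo_reid))
    opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():                     # slow exponential moving average update
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(0.99).add_(0.01 * ps)
```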
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 04:05:03 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Spinola",
"Federica",
""
],
[
"Benz",
"Philipp",
""
],
[
"Yu",
"Minhyeong",
""
],
[
"Kim",
"Tae-hoon",
""
]
] |
new_dataset
| 0.953232 |
2306.08843
|
Wanyuan Wang
|
Wanyuan Wang, Tianchi Qiao, Jinming Ma, Jiahui Jin, Zhibin Li, Weiwei
Wu, and Yichuan Jian
|
Real-Time Network-Level Traffic Signal Control: An Explicit Multiagent
Coordination Method
| null | null | null | null |
cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Efficient traffic signal control (TSC) has been one of the most useful ways
for reducing urban road congestion. The key challenges of TSC include 1) the
need for real-time signal decisions, 2) the complexity of traffic dynamics,
and 3) network-level coordination. Recent efforts that apply
reinforcement learning (RL) methods can query policies by mapping the traffic
state to the signal decision in real time; however, they are inadequate for
unexpected traffic flows. By observing real traffic information, online
planning methods can compute the signal decisions in a responsive manner. We
propose an explicit multiagent coordination (EMC)-based online planning method
that achieves adaptive, real-time and network-level TSC. By multiagent, we
model each intersection as an autonomous agent, and the coordination efficiency
is modeled by a cost (i.e., congestion index) function between neighbor
intersections. By network-level coordination, each agent exchanges messages
with respect to cost function with its neighbors in a fully decentralized
manner. By real-time, we mean that the message passing procedure can be
interrupted at any time when the real-time limit is reached, and agents select the optimal signal
decisions according to the current message. Moreover, we prove our EMC method
can guarantee network stability by borrowing ideas from transportation domain.
Finally, we test our EMC method in both synthetic and real road network
datasets. Experimental results are encouraging: compared to RL and conventional
transportation baselines, our EMC method performs reasonably well in terms of
adapting to real-time traffic dynamics, minimizing vehicle travel time and
scalability to city-scale road networks.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 04:08:09 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Wang",
"Wanyuan",
""
],
[
"Qiao",
"Tianchi",
""
],
[
"Ma",
"Jinming",
""
],
[
"Jin",
"Jiahui",
""
],
[
"Li",
"Zhibin",
""
],
[
"Wu",
"Weiwei",
""
],
[
"Jian",
"Yichuan",
""
]
] |
new_dataset
| 0.996277 |
2306.08871
|
Yanshen Sun
|
Yanshen Sun, Jianfeng He, Shuo Lei, Limeng Cui, Chang-Tien Lu
|
Med-MMHL: A Multi-Modal Dataset for Detecting Human- and LLM-Generated
Misinformation in the Medical Domain
| null | null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The pervasive influence of misinformation has far-reaching and detrimental
effects on both individuals and society. The COVID-19 pandemic has witnessed an
alarming surge in the dissemination of medical misinformation. However,
existing datasets pertaining to misinformation predominantly focus on textual
information, neglecting the inclusion of visual elements, and tend to center
solely on COVID-19-related misinformation, overlooking misinformation
surrounding other diseases. Furthermore, the potential of Large Language Models
(LLMs), such as ChatGPT, released in late 2022, in generating
misinformation has been overlooked in previous works. To overcome these
limitations, we present Med-MMHL, a novel multi-modal misinformation detection
dataset in a general medical domain encompassing multiple diseases. Med-MMHL
not only incorporates human-generated misinformation but also includes
misinformation generated by LLMs like ChatGPT. Our dataset aims to facilitate
comprehensive research and development of methodologies for detecting
misinformation across diverse diseases and various scenarios, including human
and LLM-generated misinformation detection at the sentence, document, and
multi-modal levels. To access our dataset and code, visit our GitHub
repository: \url{https://github.com/styxsys0927/Med-MMHL}.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 05:59:11 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Sun",
"Yanshen",
""
],
[
"He",
"Jianfeng",
""
],
[
"Lei",
"Shuo",
""
],
[
"Cui",
"Limeng",
""
],
[
"Lu",
"Chang-Tien",
""
]
] |
new_dataset
| 0.999387 |
2306.08888
|
Srivatsan Krishnan
|
Srivatsan Krishnan, Amir Yazdanbaksh, Shvetank Prakash, Jason Jabbour,
Ikechukwu Uchendu, Susobhan Ghosh, Behzad Boroujerdian, Daniel Richins,
Devashree Tripathy, Aleksandra Faust, Vijay Janapa Reddi
|
ArchGym: An Open-Source Gymnasium for Machine Learning Assisted
Architecture Design
|
International Symposium on Computer Architecture (ISCA 2023)
| null | null | null |
cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning is a prevalent approach to tame the complexity of design
space exploration for domain-specific architectures. Using ML for design space
exploration poses challenges. First, it is not straightforward to identify a
suitable algorithm from an increasing pool of ML methods. Second, assessing the
trade-offs between performance and sample efficiency across these methods is
inconclusive. Finally, the lack of a holistic framework for fair, reproducible, and
objective comparison across these methods hinders the adoption of ML-aided
architecture design space exploration and impedes the creation of repeatable
artifacts. To mitigate these challenges, we introduce ArchGym, an open-source
gym and easy-to-extend framework that connects diverse search algorithms to
architecture simulators. To demonstrate utility, we evaluate ArchGym across
multiple vanilla and domain-specific search algorithms in designing custom
memory controller, deep neural network accelerators, and custom SoC for AR/VR
workloads, encompassing over 21K experiments. Results suggest that with
unlimited samples, ML algorithms are equally capable of meeting the user-defined
target specification if their hyperparameters are tuned; no solution is necessarily
better than another (e.g., reinforcement learning vs. Bayesian methods). We
coin the term hyperparameter lottery to describe the chance for a search
algorithm to find an optimal design provided meticulously selected
hyperparameters. The ease of data collection and aggregation in ArchGym
facilitates research in ML-aided architecture design space exploration. As a
case study, we show this advantage by developing a proxy cost model with an
RMSE of 0.61% that offers a 2,000-fold reduction in simulation time. Code and
data for ArchGym is available at https://bit.ly/ArchGym.
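The interface pattern such a gym standardizes can be sketched as follows; the class and method names are illustrative assumptions rather than ArchGym's real API, but they capture the propose-evaluate-observe loop that lets any search algorithm drive any architecture simulator.

```python
# Minimal propose-evaluate-observe loop connecting a search agent to a simulator
# (names and cost model are illustrative, not ArchGym's actual interface).
import random

class ToyMemoryControllerSim:
    """Stands in for a cycle-accurate simulator; returns a scalar cost."""
    PARAM_SPACE = {"queue_depth": [8, 16, 32, 64], "page_policy": ["open", "closed"]}
    def evaluate(self, config):
        latency = 100 / config["queue_depth"] + (5 if config["page_policy"] == "open" else 9)
        energy = 0.2 * config["queue_depth"]
        return latency + energy          # lower is better

class RandomSearchAgent:
    def propose(self, space):
        return {k: random.choice(v) for k, v in space.items()}
    def observe(self, config, cost):
        pass                             # a smarter agent (RL, Bayesian, GA) would learn here

def run(agent, sim, budget=50):
    best = (float("inf"), None)
    for _ in range(budget):
        cfg = agent.propose(sim.PARAM_SPACE)
        cost = sim.evaluate(cfg)
        agent.observe(cfg, cost)
        best = min(best, (cost, tuple(sorted(cfg.items()))))
    return best

print(run(RandomSearchAgent(), ToyMemoryControllerSim()))
```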
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 06:41:23 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Krishnan",
"Srivatsan",
""
],
[
"Yazdanbaksh",
"Amir",
""
],
[
"Prakash",
"Shvetank",
""
],
[
"Jabbour",
"Jason",
""
],
[
"Uchendu",
"Ikechukwu",
""
],
[
"Ghosh",
"Susobhan",
""
],
[
"Boroujerdian",
"Behzad",
""
],
[
"Richins",
"Daniel",
""
],
[
"Tripathy",
"Devashree",
""
],
[
"Faust",
"Aleksandra",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
new_dataset
| 0.987127 |
2306.08893
|
Orr Zohar Mr
|
Orr Zohar, Shih-Cheng Huang, Kuan-Chieh Wang, Serena Yeung
|
LOVM: Language-Only Vision Model Selection
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained multi-modal vision-language models (VLMs) are becoming
increasingly popular due to their exceptional performance on downstream vision
applications, particularly in the few- and zero-shot settings. However,
selecting the best-performing VLM for some downstream applications is
non-trivial, as it is dataset and task-dependent. Meanwhile, the exhaustive
evaluation of all available VLMs on a novel application is not only time and
computationally demanding but also necessitates the collection of a labeled
dataset for evaluation. As the number of open-source VLM variants increases,
there is a need for an efficient model selection strategy that does not require
access to a curated evaluation dataset. This paper proposes a novel task and
benchmark for efficiently evaluating VLMs' zero-shot performance on downstream
applications without access to the downstream task dataset. Specifically, we
introduce a new task LOVM: Language-Only Vision Model Selection, where methods
are expected to perform both model selection and performance prediction based
solely on a text description of the desired downstream application. We then
introduce an extensive LOVM benchmark consisting of ground-truth evaluations
of 35 pre-trained VLMs and 23 datasets, where methods are expected to rank the
pre-trained VLMs and predict their zero-shot performance.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 06:53:05 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zohar",
"Orr",
""
],
[
"Huang",
"Shih-Cheng",
""
],
[
"Wang",
"Kuan-Chieh",
""
],
[
"Yeung",
"Serena",
""
]
] |
new_dataset
| 0.994348 |
2306.08894
|
Alena Chang
|
Alena Chang, Yinxin Wan, Guoliang Xue, Arunabha Sen
|
Entanglement Distribution in Satellite-based Dynamic Quantum Networks
| null | null | null | null |
cs.NI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low Earth Orbit (LEO) satellites present a compelling opportunity for the
establishment of a global quantum information network. However, satellite-based
entanglement distribution from a networking perspective has not been fully
investigated. Existing works often do not account for satellite movement over
time when distributing entanglement and/or often do not permit entanglement
distribution along inter-satellite links, which are two shortcomings we address
in this paper. We first define a system model which considers both satellite
movement over time and inter-satellite links. We next formulate the optimal
entanglement distribution (OED) problem under this system model and show how to
convert the OED problem in a dynamic physical network to one in a static
logical graph which can be used to solve the OED problem in the dynamic
physical network. We then propose a polynomial time greedy algorithm for
computing satellite-assisted multi-hop entanglement paths. We also design an
integer linear programming (ILP)-based algorithm to compute optimal solutions
as a baseline to study the performance of our greedy algorithm. We present
evaluation results to demonstrate the advantage of our model and algorithms.
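The conversion from a dynamic physical network to a static logical graph can be illustrated with a simple time-expanded construction (an illustrative sketch, not the paper's exact reduction): each node is copied once per time slot, links usable in a slot connect copies in consecutive slots, and waiting edges let a node hold its state.

```python
# Time-expanded logical graph built from per-slot link availability.
import networkx as nx

def time_expanded_graph(links_per_slot, nodes):
    G = nx.DiGraph()
    for t, links in enumerate(links_per_slot):
        for u, v in links:                       # link usable during slot t
            G.add_edge((u, t), (v, t + 1))       # traversing it advances time
            G.add_edge((v, t), (u, t + 1))
        for n in nodes:                          # waiting at a node is also allowed
            G.add_edge((n, t), (n, t + 1))
    return G

# slot 0: ground A sees satellite S1; slot 1: S1-S2 inter-satellite link;
# slot 2: S2 sees ground B
links_per_slot = [[("A", "S1")], [("S1", "S2")], [("S2", "B")]]
G = time_expanded_graph(links_per_slot, nodes=["A", "S1", "S2", "B"])
print(nx.shortest_path(G, ("A", 0), ("B", 3)))
# [('A', 0), ('S1', 1), ('S2', 2), ('B', 3)]
```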
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 06:56:26 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Chang",
"Alena",
""
],
[
"Wan",
"Yinxin",
""
],
[
"Xue",
"Guoliang",
""
],
[
"Sen",
"Arunabha",
""
]
] |
new_dataset
| 0.982804 |
2306.08928
|
Nicola Marchetti Prof
|
Indrakshi Dey, Nicola Marchetti, Marcello Caleffi, Angela Sara
Cacciapuoti
|
Quantum Game Theory meets Quantum Networks
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Classical game theory is a powerful tool focusing on optimized resource
distribution, allocation and sharing in classical wired and wireless networks.
As quantum networks are emerging as a means of providing true connectivity
between quantum computers, it is imperative and crucial to exploit game theory
for addressing challenges like entanglement distribution and access, routing,
topology extraction and inference for quantum networks. Quantum networks
provide the promising opportunity of employing quantum games owing to their
inherent capability of generating and sharing quantum states. Besides, quantum
games offer enhanced payoffs and winning probabilities, new strategies and
equilibria, which are unimaginable in classical games. Employing quantum game
theory to solve fundamental challenges in quantum networks opens a new
fundamental research direction necessitating inter-disciplinary efforts. In
this article, we introduce a novel game-theoretical framework for exploiting
quantum strategies to solve, as an archetypal example, one of the key
functionalities of a quantum network, namely, entanglement distribution. We
compare the quantum strategies with classical ones by showing the quantum
advantages in terms of link fidelity improvement and latency decrease in
communication. In future, we will generalize our game framework to optimize
entanglement distribution and access over any quantum network topology. We will
also explore how quantum games can be leveraged to address other challenges
like routing, optimization of quantum operations and topology design.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 08:00:50 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Dey",
"Indrakshi",
""
],
[
"Marchetti",
"Nicola",
""
],
[
"Caleffi",
"Marcello",
""
],
[
"Cacciapuoti",
"Angela Sara",
""
]
] |
new_dataset
| 0.996148 |
2306.08951
|
Philipp Van Kempen
|
Philipp van Kempen, Rafael Stahl, Daniel Mueller-Gritschneder, Ulf
Schlichtmann
|
MLonMCU: TinyML Benchmarking with Fast Retargeting
|
CODAI 2022 Workshop - Embedded System Week (ESWeek)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While there exist many ways to deploy machine learning models on
microcontrollers, it is non-trivial to choose the optimal combination of
frameworks and targets for a given application. Thus, automating the end-to-end
benchmarking flow is of high relevance nowadays. A tool called MLonMCU is
proposed in this paper and demonstrated by benchmarking the state-of-the-art
TinyML frameworks TFLite for Microcontrollers and TVM effortlessly with a large
number of configurations in a short amount of time.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 08:44:35 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"van Kempen",
"Philipp",
""
],
[
"Stahl",
"Rafael",
""
],
[
"Mueller-Gritschneder",
"Daniel",
""
],
[
"Schlichtmann",
"Ulf",
""
]
] |
new_dataset
| 0.994856 |
2306.08963
|
Shengqi Xu
|
Shengqi Xu, Xueyao Xiao, Shuning Cao, Yi Chang, Luxin Yan
|
1st Solution Places for CVPR 2023 UG$^{\textbf{2}}$+ Challenge Track
2.1-Text Recognition through Atmospheric Turbulence
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this technical report, we present the solution developed by our team
VIELab-HUST for text recognition through atmospheric turbulence in Track 2.1 of
the CVPR 2023 UG$^{2}$+ challenge. Our solution involves an efficient
multi-stage framework that restores a high-quality image from distorted frames.
Specifically, a frame selection algorithm based on sharpness is first utilized
to select the sharpest set of distorted frames. Next, each frame in the
selected frames is aligned to suppress geometric distortion through
optical-flow-based image registration. Then, a region-based image fusion method
with DT-CWT is utilized to mitigate the blur caused by the turbulence. Finally,
a learning-based deartifacts method is applied to remove the artifacts in the
fused image, generating a high-quality output. Our framework can handle both
hot-air text dataset and turbulence text dataset provided in the final testing
phase and achieved 1st place in text recognition accuracy. Our code will be
available at https://github.com/xsqhust/Turbulence_Removal.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 08:56:51 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Xu",
"Shengqi",
""
],
[
"Xiao",
"Xueyao",
""
],
[
"Cao",
"Shuning",
""
],
[
"Chang",
"Yi",
""
],
[
"Yan",
"Luxin",
""
]
] |
new_dataset
| 0.986155 |
2306.09082
|
Federico Malato
|
Federico Malato, Florian Leopold, Ville Hautamaki, Andrew Melnik
|
Behavioral Cloning via Search in Embedded Demonstration Dataset
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Behavioural cloning uses a dataset of demonstrations to learn a behavioural
policy. To overcome various learning and policy adaptation problems, we propose
to use latent space to index a demonstration dataset, instantly access similar
relevant experiences, and copy behavior from these situations. Actions from a
selected similar situation can be performed by the agent until representations
of the agent's current situation and the selected experience diverge in the
latent space. Thus, we formulate our control problem as a search problem over a
dataset of experts' demonstrations. We test our approach on BASALT
MineRL-dataset in the latent representation of a Video PreTraining model. We
compare our model to state-of-the-art Minecraft agents. Our approach can
effectively recover meaningful demonstrations and show human-like behavior of
an agent in the Minecraft environment in a wide variety of scenarios.
Experimental results reveal that the performance of our search-based approach is
comparable to that of trained models, while allowing zero-shot task adaptation by
changing the demonstration examples.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 12:25:41 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Malato",
"Federico",
""
],
[
"Leopold",
"Florian",
""
],
[
"Hautamaki",
"Ville",
""
],
[
"Melnik",
"Andrew",
""
]
] |
new_dataset
| 0.959947 |
2306.09093
|
Longyue Wang
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu,
Zefeng Du, Shuming Shi, Zhaopeng Tu
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and
Text Integration
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 12:45:25 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Lyu",
"Chenyang",
""
],
[
"Wu",
"Minghao",
""
],
[
"Wang",
"Longyue",
""
],
[
"Huang",
"Xinting",
""
],
[
"Liu",
"Bingshuai",
""
],
[
"Du",
"Zefeng",
""
],
[
"Shi",
"Shuming",
""
],
[
"Tu",
"Zhaopeng",
""
]
] |
new_dataset
| 0.998802 |
2306.09109
|
Kevis-Kokitsi Maninis
|
Varun Jampani, Kevis-Kokitsi Maninis, Andreas Engelhardt, Arjun
Karpur, Karen Truong, Kyle Sargent, Stefan Popov, Andr\'e Araujo, Ricardo
Martin-Brualla, Kaushal Patel, Daniel Vlasic, Vittorio Ferrari, Ameesh
Makadia, Ce Liu, Yuanzhen Li, Howard Zhou
|
NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and
Pose Annotations
|
Project page: https://navidataset.github.io
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in neural reconstruction enable high-quality 3D object
reconstruction from casually captured image collections. Current techniques
mostly analyze their progress on relatively simple image collections where
Structure-from-Motion (SfM) techniques can provide ground-truth (GT) camera
poses. We note that SfM techniques tend to fail on in-the-wild image
collections such as image search results with varying backgrounds and
illuminations. To enable systematic research progress on 3D reconstruction from
casual image captures, we propose NAVI: a new dataset of category-agnostic
image collections of objects with high-quality 3D scans along with per-image
2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D
alignments allow us to extract accurate derivative annotations such as dense
pixel correspondences, depth and segmentation maps. We demonstrate the use of
NAVI image collections on different problem settings and show that NAVI enables
more thorough evaluations that were not possible with existing datasets. We
believe NAVI is beneficial for systematic research progress on 3D
reconstruction and correspondence estimation. Project page:
https://navidataset.github.io
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 13:11:30 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Jampani",
"Varun",
""
],
[
"Maninis",
"Kevis-Kokitsi",
""
],
[
"Engelhardt",
"Andreas",
""
],
[
"Karpur",
"Arjun",
""
],
[
"Truong",
"Karen",
""
],
[
"Sargent",
"Kyle",
""
],
[
"Popov",
"Stefan",
""
],
[
"Araujo",
"André",
""
],
[
"Martin-Brualla",
"Ricardo",
""
],
[
"Patel",
"Kaushal",
""
],
[
"Vlasic",
"Daniel",
""
],
[
"Ferrari",
"Vittorio",
""
],
[
"Makadia",
"Ameesh",
""
],
[
"Liu",
"Ce",
""
],
[
"Li",
"Yuanzhen",
""
],
[
"Zhou",
"Howard",
""
]
] |
new_dataset
| 0.999584 |
2306.09124
|
Caixin Kang
|
Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Hang Su,
Xingxing Wei
|
DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in
the Physical World
| null | null | null | null |
cs.CV cs.AI cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks in the physical world, particularly patch attacks, pose
significant threats to the robustness and reliability of deep learning models.
Developing reliable defenses against patch attacks is crucial for real-world
applications, yet current research in this area is severely lacking. In this
paper, we propose DIFFender, a novel defense method that leverages the
pre-trained diffusion model to perform both localization and defense against
potential adversarial patch attacks. DIFFender is designed as a pipeline
consisting of two main stages: patch localization and restoration. In the
localization stage, we exploit the intriguing properties of a diffusion model
to effectively identify the locations of adversarial patches. In the
restoration stage, we employ a text-guided diffusion model to eliminate
adversarial regions in the image while preserving the integrity of the visual
content. Additionally, we design a few-shot prompt-tuning algorithm to
facilitate simple and efficient tuning, enabling the learned representations to
easily transfer to downstream tasks, which optimizes the two stages jointly. We
conduct extensive experiments on image classification and face recognition to
demonstrate that DIFFender exhibits superior robustness under strong adaptive
attacks and generalizes well across various scenarios, diverse classifiers, and
multiple attack methods.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 13:33:27 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Kang",
"Caixin",
""
],
[
"Dong",
"Yinpeng",
""
],
[
"Wang",
"Zhengyi",
""
],
[
"Ruan",
"Shouwei",
""
],
[
"Su",
"Hang",
""
],
[
"Wei",
"Xingxing",
""
]
] |
new_dataset
| 0.950017 |
2306.09126
|
Kazuki Shimada
|
Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel
Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya
Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji
|
STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes
with Spatiotemporal Annotations of Sound Events
|
25 pages, 8 figures
| null | null | null |
cs.SD cs.CV cs.MM eess.AS eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While direction of arrival (DOA) of sound events is generally estimated from
multichannel audio data recorded in a microphone array, sound events usually
derive from visually perceptible source objects, e.g., sounds of footsteps come
from the feet of a walker. This paper proposes an audio-visual sound event
localization and detection (SELD) task, which uses multichannel audio and video
information to estimate the temporal activation and DOA of target sound events.
Audio-visual SELD systems can detect and localize sound events using signals
from a microphone array and audio-visual correspondence. We also introduce an
audio-visual dataset, Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23),
which consists of multichannel audio data recorded with a microphone array,
video data, and spatiotemporal annotation of sound events. Sound scenes in
STARSS23 are recorded with instructions, which guide recording participants to
ensure adequate activity and occurrences of sound events. STARSS23 also provides
human-annotated temporal activation labels and human-confirmed DOA labels,
which are based on tracking results of a motion capture system. Our benchmark
results show that the audio-visual SELD system achieves lower localization
error than the audio-only system. The data is available at
https://zenodo.org/record/7880637.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 13:37:14 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Shimada",
"Kazuki",
""
],
[
"Politis",
"Archontis",
""
],
[
"Sudarsanam",
"Parthasaarathy",
""
],
[
"Krause",
"Daniel",
""
],
[
"Uchida",
"Kengo",
""
],
[
"Adavanne",
"Sharath",
""
],
[
"Hakala",
"Aapo",
""
],
[
"Koyama",
"Yuichiro",
""
],
[
"Takahashi",
"Naoya",
""
],
[
"Takahashi",
"Shusuke",
""
],
[
"Virtanen",
"Tuomas",
""
],
[
"Mitsufuji",
"Yuki",
""
]
] |
new_dataset
| 0.999776 |
2306.09196
|
Zhili He
|
Zhili He, Wang Chen, Jian Zhang, Yu-Hsing Wang
|
Infrastructure Crack Segmentation: Boundary Guidance Method and
Benchmark Dataset
|
17 pages, 10 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Cracks provide an essential indicator of infrastructure performance
degradation, and achieving high-precision pixel-level crack segmentation is an
issue of concern. Unlike the common research paradigms that adopt novel
artificial intelligence (AI) methods directly, this paper examines the inherent
characteristics of cracks so as to introduce boundary features into crack
identification and then builds a boundary guidance crack segmentation model
(BGCrack) with targeted structures and modules, including a high frequency
module, global information modeling module, joint optimization module, etc.
Extensive experimental results verify the feasibility of the proposed designs
and the effectiveness of the edge information in improving segmentation
results. In addition, considering that notable open-source datasets mainly
consist of asphalt pavement cracks because of ease of access, there is no
standard and widely recognized dataset yet for steel structures, one of the
primary structural forms in civil infrastructure. This paper provides a steel
crack dataset that establishes a unified and fair benchmark for the
identification of steel cracks.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 15:25:53 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"He",
"Zhili",
""
],
[
"Chen",
"Wang",
""
],
[
"Zhang",
"Jian",
""
],
[
"Wang",
"Yu-Hsing",
""
]
] |
new_dataset
| 0.984159 |
2306.09212
|
Haonan Li
|
Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao
and Yeyun Gong and Nan Duan and Timothy Baldwin
|
CMMLU: Measuring massive multitask language understanding in Chinese
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 15:49:51 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Li",
"Haonan",
""
],
[
"Zhang",
"Yixuan",
""
],
[
"Koto",
"Fajri",
""
],
[
"Yang",
"Yifei",
""
],
[
"Zhao",
"Hai",
""
],
[
"Gong",
"Yeyun",
""
],
[
"Duan",
"Nan",
""
],
[
"Baldwin",
"Timothy",
""
]
] |
new_dataset
| 0.999379 |
2306.09266
|
Christian Cre{\ss}
|
Walter Zimmer, Christian Cre{\ss}, Huu Tung Nguyen, Alois C. Knoll
|
A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside
Perception
|
8 pages, 6 figures, 3 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent Transportation Systems (ITS) allow a drastic expansion of the
visibility range and decrease occlusions for autonomous driving. To obtain
accurate detections, detailed labeled sensor data for training is required.
Unfortunately, high-quality 3D labels of LiDAR point clouds from the
infrastructure perspective of an intersection are still rare. Therefore, we
provide the A9 Intersection Dataset, which consists of labeled LiDAR point
clouds and synchronized camera images. Here, we recorded the sensor output from
two roadside cameras and LiDARs mounted on intersection gantry bridges. The
point clouds were labeled in 3D by experienced annotators. Furthermore, we
provide calibration data between all sensors, which allow the projection of the
3D labels into the camera images and an accurate data fusion. Our dataset
consists of 4.8k images and point clouds with more than 57.4k manually labeled
3D boxes. With ten object classes, it has a high diversity of road users in
complex driving maneuvers, such as left and right turns, overtaking, and
U-turns. In experiments, we provided multiple baselines for the perception
tasks. Overall, our dataset is a valuable contribution to the scientific
community to perform complex 3D camera-LiDAR roadside perception tasks. Find
data, code, and more information at https://a9-dataset.com.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 16:39:51 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Zimmer",
"Walter",
""
],
[
"Creß",
"Christian",
""
],
[
"Nguyen",
"Huu Tung",
""
],
[
"Knoll",
"Alois C.",
""
]
] |
new_dataset
| 0.999806 |
2306.09267
|
Jeremy Kepner
|
Dimitrios Ioannidis, Jeremy Kepner, Andrew Bowne, Harriet S. Bryant
|
Are ChatGPT and Other Similar Systems the Modern Lernaean Hydras of AI?
|
38 pages, 100+ references, to appear in Fordham Law Review
| null | null | null |
cs.CY cs.AI cs.DL cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of Generative Artificial Intelligence systems (``AI systems'') has
created unprecedented social engagement. AI code generation systems provide
responses (output) to questions or requests by accessing the vast library of
open-source code created by developers over decades. However, they do so by
allegedly stealing the open-source code stored in virtual libraries, known as
repositories. How all this happens and whether there is a solution short of
years of litigation that can protect innovation is the focus of this article.
We also peripherally touch upon the array of issues raised by the relationship
between AI and copyright. Looking ahead, we propose the following: (a)
immediate changes to the licenses for open-source code created by developers
that will allow access and/or use of any open-source code to humans only; (b)
we suggest revisions to the Massachusetts Institute of Technology (``MIT'')
license so that AI systems procure appropriate licenses from open-source code
developers, which we believe will harmonize standards and build social
consensus for the benefit of all of humanity rather than profit-driven centers
of innovation; (c) we call for urgent legislative action to protect the future
of AI systems while also promoting innovation; and (d) we propose that there is
a shift in the burden of proof to AI systems in obfuscation cases.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 16:40:30 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Ioannidis",
"Dimitrios",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Bowne",
"Andrew",
""
],
[
"Bryant",
"Harriet S.",
""
]
] |
new_dataset
| 0.970128 |
2306.09274
|
Dar-Yen Chen Mr
|
Dar-Yen Chen
|
Conditional Human Sketch Synthesis with Explicit Abstraction Control
|
Code is available at
https://github.com/ChenDarYen/Conditional-Human-Sketch-Synthesis-with-Explicit-Abstraction-Control
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel free-hand sketch synthesis approach addressing
explicit abstraction control in class-conditional and photo-to-sketch
synthesis. Abstraction is a vital aspect of sketches, as it defines the
fundamental distinction between a sketch and an image. Previous works relied on
implicit control to achieve different levels of abstraction, leading to
inaccurate control and synthesized sketches deviating from human sketches. To
resolve this challenge, we propose two novel abstraction control mechanisms,
state embeddings and the stroke token, integrated into a transformer-based
latent diffusion model (LDM). These mechanisms explicitly provide the required
amount of points or strokes to the model, enabling accurate point-level and
stroke-level control in synthesized sketches while preserving recognizability.
Outperforming state-of-the-art approaches, our method effectively generates
diverse, non-rigid and human-like sketches. The proposed approach enables
coherent sketch synthesis and excels in representing human habits with desired
abstraction levels, highlighting the potential of sketch synthesis for
real-world applications.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 16:54:58 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Chen",
"Dar-Yen",
""
]
] |
new_dataset
| 0.986346 |
2306.09286
|
Anh Le
|
Melissa Elkadi, Doekseong Kim, Ejaz Ahmed, Moein Sadeghi, Anh Le, Paul
Russell, Bo Ryu
|
Open Source-based Over-The-Air 5G New Radio Sidelink Testbed
|
8 pages, 11 figures, submitted to MILCOM 2023
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The focus of this paper is the prototype development for 5G new radio (NR)
sidelink communications, which enables NR UEs to transfer data independently
without the assistance of a base station (gNB), designated as sidelink mode 2.
Our design leverages open-source software operating on software-defined radios
(SDRs), which can be easily extended for multiple UE scenarios. The software
includes all signal processing components specified by 5G sidelink standards,
including Low -Density Parity Check (LDPC) encoding/decoding, polar
encoding/decoding, data and control multiplexing, modulation/demodulation, and
orthogonal frequency-division multiplexing (OFDM) modulation/demodulation. It
can be configured to operate with different bands, bandwidths, and multiple
antenna settings. One method to demonstrate the completed Physical Sidelink
Broadcast Channel (PSBCH) development is to show synchronization between a
SyncRef UE and a nearby UE. The SyncRef UE broadcasts a sidelink
synchronization signal block (S-SSB) periodically, which the nearby UE detects
and uses to synchronize its timing and frequency components with the SyncRef
UE. Once a connection is established, the SyncRef UE acts as a transmitter and
shares data with the receiver UE (nearby UE) via the physical sidelink shared
channel (PSSCH). Our physical sidelink framework is tested using both an RF
simulator and an over-the-air (OTA) testbed. In this work, we show both
synchronization and data transmission/reception with 5G sidelink mode 2, where
our OTA experimental results align well with our simulation results.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:12:05 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Elkadi",
"Melissa",
""
],
[
"Kim",
"Doekseong",
""
],
[
"Ahmed",
"Ejaz",
""
],
[
"Sadeghi",
"Moein",
""
],
[
"Le",
"Anh",
""
],
[
"Russell",
"Paul",
""
],
[
"Ryu",
"Bo",
""
]
] |
new_dataset
| 0.989175 |