id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2006.10604
|
Margherita Zorzi
|
Davide Trotta and Margherita Zorzi
|
Compositional theories for host-core languages
|
31 pages
| null | null | null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear type theories, of various types and kinds, are of fundamental
importance in most programming language research nowadays. In this paper we
describe an extension of Benton's Linear-Non-Linear type theory and model for
which we can prove some extra properties that make the system better behaved as
far as its theory is concerned. We call this system the host-core type theory.
The syntax of a host-core language is split into two parts, representing
respectively a host language H and a core language C, embedded in H. This idea,
derived from Benton's Linear-Non-Linear formulation of Linear Logic, allows a
flexible management of data linearity, which is particularly useful in
non-classical computational paradigms. The host-core style can be viewed as a
simplified notion of multi-language programming, the process of software
development in a heterogeneous programming language. In this paper, we present
the typed calculus HC, a minimal and flexible host-core system that captures
and standardizes common properties of an ideal class of host-core languages. We
provide a denotational model in terms of enriched categories and we state a
strong correspondence between syntax and semantics through the notion of
internal language. The latter result provides some useful characterizations of
host-core style, otherwise difficult to obtain. We also discuss some concrete
instances, extensions and specializations of the system HC.
|
[
{
"version": "v1",
"created": "Thu, 18 Jun 2020 15:18:25 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jun 2020 16:42:36 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Sep 2021 15:21:46 GMT"
},
{
"version": "v4",
"created": "Fri, 5 Aug 2022 13:11:35 GMT"
},
{
"version": "v5",
"created": "Thu, 30 Mar 2023 11:30:12 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Trotta",
"Davide",
""
],
[
"Zorzi",
"Margherita",
""
]
] |
new_dataset
| 0.984693 |
2010.12022
|
Adel Al-Dawood
|
Adel Al-Dawood, Serene Alhajhussein, Svetlana Yarosh
|
Saudi Arabian Parents' Perception of Online Marital Matchmaking
Technologies
|
31 pages, CSCW 2020
|
Proceedings of the ACM on Human Computer Interaction Volume 4
Issue CSCW Article 211 January 2021
|
10.1145/3432910
|
211
|
cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding a date or a spouse online is usually considered an individualistic
endeavor in Western cultures. This presents a challenge for collectivist
non-Western cultures such as Saudi Arabia where choosing a spouse is viewed as
a union of two families with parents of both spouses being heavily involved.
Our work aims to investigate how Saudi Arabian parents view the utilization of
technology by their young adults to seek potential spouses online. We report
our findings of interviews conducted with 16 Saudi Arabian parents (8 fathers,
6 mothers and 1 couple). We generate qualitative themes that provide insights
about how parents wanted to preserve their values, integrate technology into
the traditional process and protect their young adults from potential harms.
These themes lead to implications for designing suitable marital matchmaking
technologies in Saudi Arabia and opportunities for future work.
|
[
{
"version": "v1",
"created": "Mon, 19 Oct 2020 18:35:22 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Al-Dawood",
"Adel",
""
],
[
"Alhajhussein",
"Serene",
""
],
[
"Yarosh",
"Svetlana",
""
]
] |
new_dataset
| 0.999117 |
2012.03162
|
Christopher Vega
|
Christopher Vega, Shubhra Deb Paul, Patanjali SLPSK, Swarup Bhunia
|
MeLPUF: Memory-in-Logic PUF Structures for Low-Overhead IC
Authentication
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physically Unclonable Functions (PUFs) are used for securing electronic
devices across the implementation spectrum ranging from Field Programmable Gate
Array (FPGA) to system on chips (SoCs). However, existing PUF implementations
often suffer from one or more significant deficiencies: (1) significant design
overhead; (2) difficulty in configuring and integrating based on
application-specific requirements; (3) vulnerability to model-building attacks;
and (4) spatial locality to a specific region of a chip. These factors limit
their application in the authentication of designs used in diverse
applications. In this work, we propose MeLPUF: Memory-in-Logic PUF; a
low-overhead, distributed PUF that leverages the existing logic gates in a
design to create cross-coupled inverters (i.e., memory cells) in a logic
circuit as an entropy source. It exploits these memory cells' power-up states
as the entropy source to generate device-specific unique fingerprints. A
dedicated control signal governs these on-demand memory cells. They can be
dispersed across the combinational logic of a design to achieve distributed
authentication. They can also be synthesized with a standard logic synthesis
tool to meet the target area, power, and performance constraints. We evaluate
the quality of MeLPUF signatures with circuit-level simulations and
experimental measurements using FPGA silicon (TSMC 55nm process). Our analysis
shows the high quality of the PUF in terms of uniqueness, randomness, and
robustness while incurring modest overhead. We further demonstrate the
scalability of MeLPUF by aggregating power-up states from multiple memory
cells, thus creating PUF signatures or digital identifiers of varying lengths.
Additionally, we suggest optimization techniques that can be leveraged to boost
the performance of MeLPUF further.
|
[
{
"version": "v1",
"created": "Sun, 6 Dec 2020 02:18:52 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Dec 2020 21:06:27 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2023 20:10:03 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Vega",
"Christopher",
""
],
[
"Paul",
"Shubhra Deb",
""
],
[
"SLPSK",
"Patanjali",
""
],
[
"Bhunia",
"Swarup",
""
]
] |
new_dataset
| 0.998139 |
2101.09334
|
Ruofan Wu
|
Ruofan Wu, Zhikai Yao, Jennie Si and He (Helen) Huang
|
Robotic Knee Tracking Control to Mimic the Intact Human Knee Profile
Based on Actor-critic Reinforcement Learning
| null | null |
10.1109/JAS.2021.1004272
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address a state-of-the-art reinforcement learning (RL) control approach to
automatically configure robotic prosthesis impedance parameters to enable
end-to-end, continuous locomotion intended for transfemoral amputee subjects.
Specifically, our actor-critic based RL provides tracking control of a robotic
knee prosthesis to mimic the intact knee profile. This is a significant advance
from our previous RL based automatic tuning of prosthesis control parameters
which has centered on regulation control with a designer-prescribed robotic
knee profile as the target. In addition to presenting the complete tracking
control algorithm based on direct heuristic dynamic programming (dHDP), we
provide an analytical framework for the tracking controller with constrained
inputs. We show that our proposed tracking control possesses several important
properties, such as weight convergence of the learning networks, Bellman
(sub)optimality of the cost-to-go value function and control input, and
practical stability of the human-robot system under input constraint. We
further provide a systematic simulation of the proposed tracking control using
a realistic human-robot system simulator, the OpenSim, to emulate how the dHDP
enables level ground walking, walking on different terrains and at different
paces. These results show that our proposed dHDP based tracking control is not
only theoretically suitable, but also practically useful.
|
[
{
"version": "v1",
"created": "Fri, 22 Jan 2021 21:11:29 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Wu",
"Ruofan",
"",
"Helen"
],
[
"Yao",
"Zhikai",
"",
"Helen"
],
[
"Si",
"Jennie",
"",
"Helen"
],
[
"He",
"",
"",
"Helen"
],
[
"Huang",
"",
""
]
] |
new_dataset
| 0.954927 |
2103.07908
|
Pou-Chun Kung
|
Pou-Chun Kung and Chieh-Chih Wang and Wen-Chieh Lin
|
A Normal Distribution Transform-Based Radar Odometry Designed For
Scanning and Automotive Radars
|
Accepted for publication in ICRA 2021. Code is available: For
scanning RO, see https://github.com/kungfrank/pw_ndt_radar_scan_matching .
For automotive RO, see
https://github.com/kungfrank/pw_ndt_automotive_radar_scan_matching
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing radar sensors can be classified into automotive and scanning radars.
While most radar odometry (RO) methods are only designed for a specific type of
radar, our RO method adapts to both scanning and automotive radars. Our RO is
simple yet effective, where the pipeline consists of thresholding,
probabilistic submap building, and an NDT-based radar scan matching. The
proposed RO has been tested on two public radar datasets: the Oxford Radar
RobotCar dataset and the nuScenes dataset, which provide scanning and
automotive radar data respectively. The results show that our approach
surpasses state-of-the-art RO using either automotive or scanning radar by
reducing translational error by 51% and 30%, respectively, and rotational error
by 17% and 29%, respectively. We also show that our RO achieves
centimeter-level accuracy comparable to lidar odometry, and that automotive and
scanning RO have similar accuracy.
|
[
{
"version": "v1",
"created": "Sun, 14 Mar 2021 12:22:32 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Mar 2021 14:56:48 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 05:52:46 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Kung",
"Pou-Chun",
""
],
[
"Wang",
"Chieh-Chih",
""
],
[
"Lin",
"Wen-Chieh",
""
]
] |
new_dataset
| 0.999214 |
2203.09516
|
Yen-Chi Cheng
|
Paritosh Mittal, Yen-Chi Cheng, Maneesh Singh and Shubham Tulsiani
|
AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation
|
In CVPR 2022. The first two authors contributed equally to this work.
Project: https://yccyenchicheng.github.io/AutoSDF/. Add Supp
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Powerful priors allow us to perform inference with insufficient information.
In this paper, we propose an autoregressive prior for 3D shapes to solve
multimodal 3D tasks such as shape completion, reconstruction, and generation.
We model the distribution over 3D shapes as a non-sequential autoregressive
distribution over a discretized, low-dimensional, symbolic grid-like latent
representation of 3D shapes. This enables us to represent distributions over 3D
shapes conditioned on information from an arbitrary set of spatially anchored
query locations and thus perform shape completion in such arbitrary settings
(e.g., generating a complete chair given only a view of the back leg). We also
show that the learned autoregressive prior can be leveraged for conditional
tasks such as single-view reconstruction and language-based generation. This is
achieved by learning task-specific naive conditionals which can be approximated
by light-weight models trained on minimal paired data. We validate the
effectiveness of the proposed method using both quantitative and qualitative
evaluation and show that the proposed method outperforms the specialized
state-of-the-art methods trained for individual tasks. The project page with
code and video visualizations can be found at
https://yccyenchicheng.github.io/AutoSDF/.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 17:59:54 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2022 20:57:07 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2023 21:33:16 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Mittal",
"Paritosh",
""
],
[
"Cheng",
"Yen-Chi",
""
],
[
"Singh",
"Maneesh",
""
],
[
"Tulsiani",
"Shubham",
""
]
] |
new_dataset
| 0.991929 |
2204.02855
|
Haoling Zhang
|
Haoling Zhang, Zhaojun Lan, Wenwei Zhang, Xun Xu, Zhi Ping, Yiwei
Zhang, Yue Shen
|
SPIDER-WEB generates coding algorithms with superior error tolerance and
real-time information retrieval capacity
|
47 pages; 13 figures; 8 tables
| null | null | null |
cs.ET cs.IT math.CO math.IT q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
DNA has been considered a promising medium for storing digital information.
As an essential step in the DNA-based data storage workflow, coding algorithms
are responsible for implementing functions including bit-to-base transcoding, error
correction, etc. In previous studies, these functions are normally realized by
introducing multiple algorithms. Here, we report a graph-based architecture,
named SPIDER-WEB, providing an all-in-one coding solution by generating
customized algorithms automatically. SPIDER-WEB is able to correct a maximum of
4% edit errors in the DNA sequences including substitution and
insertion/deletion (indel), with only 5.5% redundant symbols. Since no DNA
sequence pretreatment is required for the correcting and decoding processes,
SPIDER-WEB offers the function of real-time information retrieval, which is
305.08 times faster than the speed of single-molecule sequencing techniques.
Our retrieval process is 2 orders of magnitude faster than the
conventional one on megabyte-level data and can scale to fit
exabyte-level data. Therefore, SPIDER-WEB holds the potential to improve the
practicability in large-scale data storage applications.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 14:22:26 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Apr 2022 14:22:02 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 11:51:44 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zhang",
"Haoling",
""
],
[
"Lan",
"Zhaojun",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Xu",
"Xun",
""
],
[
"Ping",
"Zhi",
""
],
[
"Zhang",
"Yiwei",
""
],
[
"Shen",
"Yue",
""
]
] |
new_dataset
| 0.975627 |
2205.10655
|
Alankar Kotwal
|
Alankar Kotwal and Anat Levin and Ioannis Gkioulekas
|
Swept-Angle Synthetic Wavelength Interferometry
| null | null | null | null |
cs.CV physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new imaging technique, swept-angle synthetic wavelength
interferometry, for full-field micron-scale 3D sensing. As in conventional
synthetic wavelength interferometry, our technique uses light consisting of two
narrowly-separated optical wavelengths, resulting in per-pixel interferometric
measurements whose phase encodes scene depth. Our technique additionally uses a
new type of light source that, by emulating spatially-incoherent illumination,
makes interferometric measurements insensitive to aberrations and (sub)surface
scattering, effects that corrupt phase measurements. The resulting technique
combines the robustness to such corruptions of scanning interferometric setups,
with the speed of full-field interferometric setups. Overall, our technique can
recover full-frame depth at a lateral and axial resolution of 5 microns, at
frame rates of 5 Hz, even under strong ambient light. We build an experimental
prototype, and use it to demonstrate these capabilities by scanning a variety
of objects, including objects representative of applications in inspection and
fabrication, and objects that contain challenging light scattering effects.
|
[
{
"version": "v1",
"created": "Sat, 21 May 2022 18:38:05 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 15:57:15 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 07:02:03 GMT"
},
{
"version": "v4",
"created": "Wed, 29 Mar 2023 19:06:36 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Kotwal",
"Alankar",
""
],
[
"Levin",
"Anat",
""
],
[
"Gkioulekas",
"Ioannis",
""
]
] |
new_dataset
| 0.994086 |
2205.13115
|
Jaemin Cho
|
Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung
Bui, Mohit Bansal
|
Fine-grained Image Captioning with CLIP Reward
|
NAACL Findings 2022
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern image captioning models are usually trained with text similarity
objectives. However, since reference captions in public datasets often describe
the most salient common objects, models trained with text similarity objectives
tend to ignore specific and detailed aspects of an image that distinguish it
from others. Toward more descriptive and distinctive caption generation, we
propose using CLIP, a multimodal encoder trained on huge image-text pairs from
the web, to calculate multimodal similarity and use it as a reward function. We
also propose a simple finetuning strategy of the CLIP text encoder to improve
grammar that does not require extra text annotation. This completely eliminates
the need for reference captions during the reward computation. To
comprehensively evaluate descriptive captions, we introduce FineCapEval, a new
dataset for caption evaluation with fine-grained criteria: overall, background,
object, relations. In our experiments on text-to-image retrieval and
FineCapEval, the proposed CLIP-guided model generates more distinctive captions
than the CIDEr-optimized model. We also show that our unsupervised grammar
finetuning of the CLIP text encoder alleviates the degeneration problem of the
naive CLIP reward. Lastly, we show human analysis where the annotators strongly
prefer the CLIP reward to the CIDEr and MLE objectives according to various
criteria. Code and Data: https://github.com/j-min/CLIP-Caption-Reward
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 02:46:09 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2023 18:26:34 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Cho",
"Jaemin",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Kale",
"Ajinkya",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Bui",
"Trung",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.999686 |
2206.07796
|
MD. Mahim Anjum Haque
|
Md Mahim Anjum Haque and Wasi Uddin Ahmad and Ismini Lourentzou and
Chris Brown
|
FixEval: Execution-based Evaluation of Program Fixes for Programming
Problems
| null | null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The complexity of modern software has led to a drastic increase in the time
and cost associated with detecting and rectifying software bugs. In response,
researchers have explored various methods to automatically generate fixes for
buggy code. However, due to the large combinatorial space of possible fixes for
any given bug, few tools and datasets are available to evaluate model-generated
fixes effectively. To address this issue, we introduce FixEval, a benchmark
comprising buggy code submissions to competitive programming problems and
their corresponding fixes. FixEval offers an extensive collection of unit tests
to evaluate the correctness of model-generated program fixes and assess further
information regarding time, memory constraints, and acceptance based on a
verdict. We consider two Transformer language models pretrained on programming
languages as our baseline and compare them using match-based and
execution-based evaluation metrics. Our experiments show that match-based
metrics do not reflect model-generated program fixes accurately. At the same
time, execution-based methods evaluate programs through all cases and scenarios
designed explicitly for that solution. Therefore, we believe FixEval provides a
step towards real-world automatic bug fixing and model-generated code
evaluation. The dataset and models are open-sourced at
https://github.com/mahimanzum/FixEval.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 20:18:43 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2022 04:27:34 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Sep 2022 21:10:13 GMT"
},
{
"version": "v4",
"created": "Thu, 30 Mar 2023 14:30:46 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Haque",
"Md Mahim Anjum",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Lourentzou",
"Ismini",
""
],
[
"Brown",
"Chris",
""
]
] |
new_dataset
| 0.974184 |
2209.02970
|
Junjie Wang
|
Jiaxing Zhang, Ruyi Gan, Junjie Wang, Yuxiang Zhang, Lin Zhang, Ping
Yang, Xinyu Gao, Ziwei Wu, Xiaoqun Dong, Junqing He, Jianheng Zhuo, Qi Yang,
Yongfeng Huang, Xiayu Li, Yanghan Wu, Junyu Lu, Xinyu Zhu, Weifeng Chen, Ting
Han, Kunhao Pan, Rui Wang, Hao Wang, Xiaojun Wu, Zhongshen Zeng, Chongpei
Chen
|
Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence
|
Added the Chinese version and is now a bilingual paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, foundation models have become one of the fundamental infrastructures in
artificial intelligence, paving the way to general intelligence. However, the
reality presents two urgent challenges: existing foundation models are
dominated by the English-language community; users are often given limited
resources and thus cannot always use foundation models. To support the
development of the Chinese-language community, we introduce an open-source
project, called Fengshenbang, which is led by the research center for Cognitive
Computing and Natural Language (CCNL). Our project has comprehensive
capabilities, including large pre-trained models, user-friendly APIs,
benchmarks, datasets, and others. We wrap all these in three sub-projects: the
Fengshenbang Model, the Fengshen Framework, and the Fengshen Benchmark. An
open-source roadmap, Fengshenbang, aims to re-evaluate the open-source
community of Chinese pre-trained large-scale models, prompting the development
of the entire Chinese large-scale model community. We also want to build a
user-centered open-source ecosystem to allow individuals to access the desired
models to match their computing resources. Furthermore, we invite companies,
colleges, and research institutions to collaborate with us to build the
large-scale open-source model-based ecosystem. We hope that this project will
be the foundation of Chinese cognitive intelligence.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 07:32:37 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 07:57:41 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 14:22:55 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zhang",
"Jiaxing",
""
],
[
"Gan",
"Ruyi",
""
],
[
"Wang",
"Junjie",
""
],
[
"Zhang",
"Yuxiang",
""
],
[
"Zhang",
"Lin",
""
],
[
"Yang",
"Ping",
""
],
[
"Gao",
"Xinyu",
""
],
[
"Wu",
"Ziwei",
""
],
[
"Dong",
"Xiaoqun",
""
],
[
"He",
"Junqing",
""
],
[
"Zhuo",
"Jianheng",
""
],
[
"Yang",
"Qi",
""
],
[
"Huang",
"Yongfeng",
""
],
[
"Li",
"Xiayu",
""
],
[
"Wu",
"Yanghan",
""
],
[
"Lu",
"Junyu",
""
],
[
"Zhu",
"Xinyu",
""
],
[
"Chen",
"Weifeng",
""
],
[
"Han",
"Ting",
""
],
[
"Pan",
"Kunhao",
""
],
[
"Wang",
"Rui",
""
],
[
"Wang",
"Hao",
""
],
[
"Wu",
"Xiaojun",
""
],
[
"Zeng",
"Zhongshen",
""
],
[
"Chen",
"Chongpei",
""
]
] |
new_dataset
| 0.998985 |
2210.15511
|
Zhi-Qi Cheng
|
Jin-Peng Lan, Zhi-Qi Cheng, Jun-Yan He, Chenyang Li, Bin Luo, Xu Bao,
Wangmeng Xiang, Yifeng Geng, Xuansong Xie
|
ProContEXT: Exploring Progressive Context Transformer for Tracking
|
Accepted at ICASSP 2023, source code is at
https://github.com/zhiqic/ProContEXT
| null | null | null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing Visual Object Tracking (VOT) only takes the target area in the first
frame as a template. This causes tracking to inevitably fail in fast-changing
and crowded scenes, as it cannot account for changes in object appearance
between frames. To this end, we revamped the tracking framework with
Progressive Context Encoding Transformer Tracker (ProContEXT), which coherently
exploits spatial and temporal contexts to predict object motion trajectories.
Specifically, ProContEXT leverages a context-aware self-attention module to
encode the spatial and temporal context, refining and updating the multi-scale
static and dynamic templates to progressively perform accurate tracking. It
explores the complementarity between spatial and temporal contexts, raising a new
pathway to multi-context modeling for transformer-based trackers. In addition,
ProContEXT revised the token pruning technique to reduce computational
complexity. Extensive experiments on popular benchmark datasets such as GOT-10k
and TrackingNet demonstrate that the proposed ProContEXT achieves
state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 14:47:19 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2022 21:45:32 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Mar 2023 02:02:43 GMT"
},
{
"version": "v4",
"created": "Thu, 30 Mar 2023 06:12:26 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Lan",
"Jin-Peng",
""
],
[
"Cheng",
"Zhi-Qi",
""
],
[
"He",
"Jun-Yan",
""
],
[
"Li",
"Chenyang",
""
],
[
"Luo",
"Bin",
""
],
[
"Bao",
"Xu",
""
],
[
"Xiang",
"Wangmeng",
""
],
[
"Geng",
"Yifeng",
""
],
[
"Xie",
"Xuansong",
""
]
] |
new_dataset
| 0.995507 |
2211.10598
|
Chuanfu Shen
|
Chuanfu Shen, Chao Fan, Wei Wu, Rui Wang, George Q. Huang, Shiqi Yu
|
LidarGait: Benchmarking 3D Gait Recognition with Point Clouds
|
15 pages, 15 figures, 4 tables
|
published on CVPR2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-based gait recognition has achieved impressive results in constrained
scenarios. However, visual cameras neglect human 3D structure information,
which limits the feasibility of gait recognition in the 3D wild world. Instead
of extracting gait features from images, this work explores precise 3D gait
features from point clouds and proposes a simple yet efficient 3D gait
recognition framework, termed LidarGait. Our proposed approach projects sparse
point clouds into depth maps to learn the representations with 3D geometry
information, which outperforms existing point-wise and camera-based methods by
a significant margin. Due to the lack of point cloud datasets, we built the
first large-scale LiDAR-based gait recognition dataset, SUSTech1K, collected by
a LiDAR sensor and an RGB camera. The dataset contains 25,239 sequences from
1,050 subjects and covers many variations, including visibility, views,
occlusions, clothing, carrying, and scenes. Extensive experiments show that (1)
3D structure information serves as a significant feature for gait recognition.
(2) LidarGait outperforms existing point-based and silhouette-based methods by
a significant margin, while it also offers stable cross-view results. (3) The
LiDAR sensor is superior to the RGB camera for gait recognition in the outdoor
environment. The source code and dataset have been made available at
https://lidargait.github.io.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 06:23:08 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 07:51:03 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Shen",
"Chuanfu",
""
],
[
"Fan",
"Chao",
""
],
[
"Wu",
"Wei",
""
],
[
"Wang",
"Rui",
""
],
[
"Huang",
"George Q.",
""
],
[
"Yu",
"Shiqi",
""
]
] |
new_dataset
| 0.993728 |
2211.13218
|
James Smith
|
James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola
Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris,
Zsolt Kira
|
CODA-Prompt: COntinual Decomposed Attention-based Prompting for
Rehearsal-Free Continual Learning
|
Accepted by the 2023 IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR 2023)
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer vision models suffer from a phenomenon known as catastrophic
forgetting when learning novel concepts from continuously shifting training
data. Typical solutions for this continual learning problem require extensive
rehearsal of previously seen data, which increases memory costs and may violate
data privacy. Recently, the emergence of large-scale pre-trained vision
transformer models has enabled prompting approaches as an alternative to
data-rehearsal. These approaches rely on a key-query mechanism to generate
prompts and have been found to be highly resistant to catastrophic forgetting
in the well-established rehearsal-free continual learning setting. However, the
key mechanism of these methods is not trained end-to-end with the task
sequence. Our experiments show that this leads to a reduction in their
plasticity, hence sacrificing new task accuracy, and an inability to benefit from
expanded parameter capacity. We instead propose to learn a set of prompt
components which are assembled with input-conditioned weights to produce
input-conditioned prompts, resulting in a novel attention-based end-to-end
key-query scheme. Our experiments show that we outperform the current SOTA
method DualPrompt on established benchmarks by as much as 4.5% in average final
accuracy. We also outperform the state of the art by as much as 4.4% accuracy on a
continual learning benchmark which contains both class-incremental and
domain-incremental task shifts, corresponding to many practical settings. Our
code is available at https://github.com/GT-RIPL/CODA-Prompt
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 18:57:11 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 17:58:59 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Smith",
"James Seale",
""
],
[
"Karlinsky",
"Leonid",
""
],
[
"Gutta",
"Vyshnavi",
""
],
[
"Cascante-Bonilla",
"Paola",
""
],
[
"Kim",
"Donghyun",
""
],
[
"Arbelle",
"Assaf",
""
],
[
"Panda",
"Rameswar",
""
],
[
"Feris",
"Rogerio",
""
],
[
"Kira",
"Zsolt",
""
]
] |
new_dataset
| 0.999251 |
2212.13738
|
Jiawei Ma
|
Yuncong Yang, Jiawei Ma, Shiyuan Huang, Long Chen, Xudong Lin,
Guangxing Han, Shih-Fu Chang
|
TempCLR: Temporal Alignment Representation with Contrastive Learning
|
ICLR 2023 Camera Ready. Code Link:
https://github.com/yyuncong/TempCLR
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video representation learning has been successful in video-text pre-training
for zero-shot transfer, where each sentence is trained to be close to the
paired video clips in a common feature space. For long videos, given a
paragraph of description where the sentences describe different segments of the
video, by matching all sentence-clip pairs, the paragraph and the full video
are aligned implicitly. However, such unit-level comparison may ignore global
temporal context, which inevitably limits the generalization ability. In this
paper, we propose a contrastive learning framework TempCLR to compare the full
video and the paragraph explicitly. As the video/paragraph is formulated as a
sequence of clips/sentences, under the constraint of their temporal order, we
use dynamic time warping to compute the minimum cumulative cost over
sentence-clip pairs as the sequence-level distance. To explore the temporal
dynamics, we break the consistency of temporal succession by shuffling video
clips w.r.t. temporal granularity. Then, we obtain the representations for
clips/sentences, which perceive the temporal information and thus facilitate
the sequence alignment. In addition to pre-training on the video and paragraph,
our approach can also generalize on the matching between video instances. We
evaluate our approach on video retrieval, action step localization, and
few-shot action recognition, and achieve consistent performance gain over all
three tasks. Detailed ablation studies are provided to justify the approach
design.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 08:10:31 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 01:42:53 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Yang",
"Yuncong",
""
],
[
"Ma",
"Jiawei",
""
],
[
"Huang",
"Shiyuan",
""
],
[
"Chen",
"Long",
""
],
[
"Lin",
"Xudong",
""
],
[
"Han",
"Guangxing",
""
],
[
"Chang",
"Shih-Fu",
""
]
] |
new_dataset
| 0.992063 |
2301.08269
|
Ruozhou Yu
|
Huayue Gu, Zhouyu Li, Ruozhou Yu, Xiaojian Wang, Fangtong Zhou,
Jianqing Liu
|
FENDI: High-Fidelity Entanglement Distribution in the Quantum Internet
| null | null | null | null |
cs.NI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A quantum network distributes quantum entanglements between remote nodes,
which is key to many quantum applications. However, unavoidable noise in
quantum operations could lead to both low throughput and low quality of
entanglement distribution. This paper aims to address the simultaneous
exponential degradation in throughput and quality in a buffered multi-hop
quantum network. Based on an end-to-end fidelity model with worst-case
(isotropic) noise, we formulate the high-fidelity remote entanglement
distribution problem for a single source-destination pair, and prove its
NP-hardness. To address the problem, we develop a fully polynomial-time
approximation scheme for the control plane of the quantum network, and a
distributed data plane protocol that achieves the desired long-term throughput
and worst-case fidelity based on control plane outputs. To evaluate our
algorithm and protocol, we develop a discrete-time quantum network simulator.
Simulation results show the superior performance of our approach compared to
existing fidelity-agnostic and fidelity-aware solutions.
|
[
{
"version": "v1",
"created": "Thu, 19 Jan 2023 19:05:02 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 02:06:45 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Gu",
"Huayue",
""
],
[
"Li",
"Zhouyu",
""
],
[
"Yu",
"Ruozhou",
""
],
[
"Wang",
"Xiaojian",
""
],
[
"Zhou",
"Fangtong",
""
],
[
"Liu",
"Jianqing",
""
]
] |
new_dataset
| 0.99709 |
2301.12700
|
Xintao Chu
|
Xintao Chu, Jianping Liu, Jian Wang, Xiaofeng Wang, Yingfei Wang, Meng
Wang, Xunxun Gu
|
CSDR-BERT: a pre-trained scientific dataset match model for Chinese
Scientific Dataset Retrieval
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the number of open and shared scientific datasets on the Internet
increases under the open science movement, efficiently retrieving these
datasets is a crucial task in information retrieval (IR) research. In recent
years, the development of large models, particularly the pre-training and
fine-tuning paradigm, which involves pre-training on large models and
fine-tuning on downstream tasks, has provided new solutions for IR match tasks.
In this study, we use the original BERT token in the embedding layer, improve
the Sentence-BERT model structure in the model layer by introducing the SimCSE
and K-Nearest Neighbors method, and use the cosent loss function in the
optimization phase to optimize the target output. Our experimental results show
that our model outperforms other competing models on both public and self-built
datasets through comparative experiments and ablation implementations. This
study explores and validates the feasibility and efficiency of pre-training
techniques for semantic retrieval of Chinese scientific datasets.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 07:12:38 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 04:56:10 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 09:18:52 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Chu",
"Xintao",
""
],
[
"Liu",
"Jianping",
""
],
[
"Wang",
"Jian",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Wang",
"Yingfei",
""
],
[
"Wang",
"Meng",
""
],
[
"Gu",
"Xunxun",
""
]
] |
new_dataset
| 0.991331 |
2302.10574
|
Weiqin Zhao
|
Weiqin Zhao, Shujun Wang, Maximus Yeung, Tianye Niu, Lequan Yu
|
MulGT: Multi-task Graph-Transformer with Task-aware Knowledge Injection
and Domain Knowledge-driven Pooling for Whole Slide Image Analysis
|
AAAI 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Whole slide image (WSI) has been widely used to assist automated diagnosis
under the deep learning fields. However, most previous works only discuss the
SINGLE-task setting, which is not aligned with the real clinical setting, where
pathologists often conduct multiple diagnosis tasks simultaneously. Also, it is
commonly recognized that the multi-task learning paradigm can improve learning
efficiency by exploiting commonalities and differences across multiple tasks.
To this end, we present a novel multi-task framework (i.e., MulGT) for WSI
analysis by the specially designed Graph-Transformer equipped with Task-aware
Knowledge Injection and Domain Knowledge-driven Graph Pooling modules.
Basically, with the Graph Neural Network and Transformer as the building
blocks, our framework is able to learn task-agnostic low-level local
information as well as task-specific high-level global representation.
Considering that different tasks in WSI analysis depend on different features
and properties, we also design a novel Task-aware Knowledge Injection module to
transfer the task-shared graph embedding into task-specific feature spaces to
learn more accurate representation for different tasks. Further, we elaborately
design a novel Domain Knowledge-driven Graph Pooling module for each task to
improve both the accuracy and robustness of different tasks by leveraging
different diagnosis patterns of multiple tasks. We evaluated our method on two
public WSI datasets from TCGA projects, i.e., esophageal carcinoma and kidney
carcinoma. Experimental results show that our method outperforms single-task
counterparts and the state-of-the-art methods on both tumor typing and staging
tasks.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 10:00:58 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 14:10:33 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 08:51:05 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zhao",
"Weiqin",
""
],
[
"Wang",
"Shujun",
""
],
[
"Yeung",
"Maximus",
""
],
[
"Niu",
"Tianye",
""
],
[
"Yu",
"Lequan",
""
]
] |
new_dataset
| 0.950853 |
2302.11883
|
Xinling Yu
|
Xinling Yu, Jos\'e E. C. Serrall\'es, Ilias I. Giannakopoulos, Ziyue
Liu, Luca Daniel, Riccardo Lattanzi, Zheng Zhang
|
PIFON-EPT: MR-Based Electrical Property Tomography Using
Physics-Informed Fourier Networks
|
10 pages, submitted to IEEE TBME
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
\textit{Objective:} In this paper, we introduce Physics-Informed Fourier
Networks for Electrical Properties Tomography (PIFON-EPT), a novel deep
learning-based method that solves an inverse scattering problem based on noisy
and/or incomplete magnetic resonance (MR) measurements. \textit{Methods:} We
used two separate fully-connected neural networks, namely $B_1^{+}$ Net and EP
Net, to solve the Helmholtz equation in order to learn a de-noised version of
the input $B_1^{+}$ maps and estimate the object's EP. A random Fourier
features mapping was embedded into $B_1^{+}$ Net, to learn the high-frequency
details of $B_1^{+}$ more efficiently. The two neural networks were trained
jointly by minimizing the combination of a physics-informed loss and a data
mismatch loss via gradient descent. \textit{Results:} We performed several
numerical experiments, showing that PIFON-EPT could provide physically
consistent reconstructions of the EP and transmit field. Even when only $50\%$
of the noisy MR measurements were used as inputs, our method could still
reconstruct the EP and transmit field with average error $2.49\%$, $4.09\%$ and
$0.32\%$ for the relative permittivity, conductivity and $B_{1}^{+}$,
respectively, over the entire volume of the phantom. The generalized version of
PIFON-EPT that accounts for gradients of EP yielded accurate results at the
interface between regions of different EP values without requiring any boundary
conditions. \textit{Conclusion:} This work demonstrated the feasibility of
PIFON-EPT, suggesting it could be an accurate and effective method for EP
estimation. \textit{Significance:} PIFON-EPT can efficiently de-noise $B_1^{+}$
maps, which has the potential to improve other MR-based EPT techniques.
Furthermore, PIFON-EPT is the first technique that can reconstruct EP and
$B_{1}^{+}$ simultaneously from incomplete noisy MR measurements.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 09:42:21 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 09:54:40 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 07:09:35 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Yu",
"Xinling",
""
],
[
"Serrallés",
"José E. C.",
""
],
[
"Giannakopoulos",
"Ilias I.",
""
],
[
"Liu",
"Ziyue",
""
],
[
"Daniel",
"Luca",
""
],
[
"Lattanzi",
"Riccardo",
""
],
[
"Zhang",
"Zheng",
""
]
] |
new_dataset
| 0.994465 |
2303.07337
|
Qihao Liu
|
Qihao Liu, Adam Kortylewski, Alan Yuille
|
PoseExaminer: Automated Testing of Out-of-Distribution Robustness in
Human Pose and Shape Estimation
|
Accepted to CVPR 2023; Code: https://github.com/qihao067/PoseExaminer
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human pose and shape (HPS) estimation methods achieve remarkable results.
However, current HPS benchmarks are mostly designed to test models in scenarios
that are similar to the training data. This can lead to critical situations in
real-world applications when the observed data differs significantly from the
training data and hence is out-of-distribution (OOD). It is therefore important
to test and improve the OOD robustness of HPS methods. To address this
fundamental problem, we develop a simulator that can be controlled in a
fine-grained manner using interpretable parameters to explore the manifold of
images of human pose, e.g. by varying poses, shapes, and clothes. We introduce
a learning-based testing method, termed PoseExaminer, that automatically
diagnoses HPS algorithms by searching over the parameter space of human pose
images to find the failure modes. Our strategy for exploring this
high-dimensional parameter space is a multi-agent reinforcement learning
system, in which the agents collaborate to explore different parts of the
parameter space. We show that our PoseExaminer discovers a variety of
limitations in current state-of-the-art models that are relevant in real-world
scenarios but are missed by current benchmarks. For example, it finds large
regions of realistic human poses that are not predicted correctly, as well as
reduced performance for humans with skinny and corpulent body shapes. In
addition, we show that fine-tuning HPS methods by exploiting the failure modes
found by PoseExaminer improve their robustness and even their performance on
standard benchmarks by a significant margin. The code is available for
research purposes.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 17:58:54 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 04:34:04 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Liu",
"Qihao",
""
],
[
"Kortylewski",
"Adam",
""
],
[
"Yuille",
"Alan",
""
]
] |
new_dataset
| 0.962747 |
2303.07489
|
Junjie Ke
|
Junjie Ke, Tianhao Zhang, Yilin Wang, Peyman Milanfar, Feng Yang
|
MRET: Multi-resolution Transformer for Video Quality Assessment
|
Frontiers Signal Processing in Computational Video and Video
Streaming
(https://www.frontiersin.org/articles/10.3389/frsip.2023.1137006/full)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
No-reference video quality assessment (NR-VQA) for user generated content
(UGC) is crucial for understanding and improving visual experience. Unlike
video recognition tasks, VQA tasks are sensitive to changes in input
resolution. Since large amounts of UGC videos nowadays are 720p or above, the
fixed and relatively small input used in conventional NR-VQA methods results in
missing high-frequency details for many videos. In this paper, we propose a
novel Transformer-based NR-VQA framework that preserves the high-resolution
quality information. With the multi-resolution input representation and a novel
multi-resolution patch sampling mechanism, our method enables a comprehensive
view of both the global video composition and local high-resolution details.
The proposed approach can effectively aggregate quality information across
different granularities in spatial and temporal dimensions, making the model
robust to input resolution variations. Our method achieves state-of-the-art
performance on large-scale UGC VQA datasets LSVQ and LSVQ-1080p, and on
KoNViD-1k and LIVE-VQC without fine-tuning.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 21:48:49 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2023 18:23:54 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Ke",
"Junjie",
""
],
[
"Zhang",
"Tianhao",
""
],
[
"Wang",
"Yilin",
""
],
[
"Milanfar",
"Peyman",
""
],
[
"Yang",
"Feng",
""
]
] |
new_dataset
| 0.999184 |
2303.08132
|
Qihao Liu
|
Qihao Liu, Junfeng Wu, Yi Jiang, Xiang Bai, Alan Yuille, Song Bai
|
InstMove: Instance Motion for Object-centric Video Segmentation
|
Accepted to CVPR 2023; Code: https://github.com/wjf5203/VNext
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite significant efforts, cutting-edge video segmentation methods still
remain sensitive to occlusion and rapid movement, due to their reliance on the
appearance of objects in the form of object embeddings, which are vulnerable to
these disturbances. A common solution is to use optical flow to provide motion
information, but essentially it only considers pixel-level motion, which still
relies on appearance similarity and hence is often inaccurate under occlusion
and fast movement. In this work, we study the instance-level motion and present
InstMove, which stands for Instance Motion for Object-centric Video
Segmentation. In comparison to pixel-wise motion, InstMove mainly relies on
instance-level motion information that is free from image feature embeddings,
and features physical interpretations, making it more accurate and robust
toward occlusion and fast-moving objects. To better fit in with the video
segmentation tasks, InstMove uses instance masks to model the physical presence
of an object and learns the dynamic model through a memory network to predict
its position and shape in the next frame. With only a few lines of code,
InstMove can be integrated into current SOTA methods for three different video
segmentation tasks and boost their performance. Specifically, we improve the
previous arts by 1.5 AP on OVIS dataset, which features heavy occlusions, and
4.9 AP on YouTubeVIS-Long dataset, which mainly contains fast-moving objects.
These results suggest that instance-level motion is robust and accurate, and
hence serving as a powerful solution in complex scenarios for object-centric
video segmentation.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 17:58:44 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 04:23:32 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Liu",
"Qihao",
""
],
[
"Wu",
"Junfeng",
""
],
[
"Jiang",
"Yi",
""
],
[
"Bai",
"Xiang",
""
],
[
"Yuille",
"Alan",
""
],
[
"Bai",
"Song",
""
]
] |
new_dataset
| 0.995268 |
2303.11502
|
Subhadeep Koley
|
Ayan Kumar Bhunia, Subhadeep Koley, Amandeep Kumar, Aneeshan Sain,
Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
|
Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings
|
CVPR 2023. Project page available at
https://ayankumarbhunia.github.io/Sketch2Saliency/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human sketch has already proved its worth in various visual understanding
tasks (e.g., retrieval, segmentation, image-captioning, etc). In this paper, we
reveal a new trait of sketches - that they are also salient. This is intuitive
as sketching is a natural attentive process at its core. More specifically, we
aim to study how sketches can be used as a weak label to detect salient objects
present in an image. To this end, we propose a novel method that emphasises
how "salient object" could be explained by hand-drawn sketches. To accomplish
this, we introduce a photo-to-sketch generation model that aims to generate
sequential sketch coordinates corresponding to a given visual photo through a
2D attention mechanism. Attention maps accumulated across the time steps give
rise to salient regions in the process. Extensive quantitative and qualitative
experiments prove our hypothesis and delineate how our sketch-based saliency
detection model gives a competitive performance compared to the
state-of-the-art.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 23:46:46 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 22:14:11 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 15:08:36 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Bhunia",
"Ayan Kumar",
""
],
[
"Koley",
"Subhadeep",
""
],
[
"Kumar",
"Amandeep",
""
],
[
"Sain",
"Aneeshan",
""
],
[
"Chowdhury",
"Pinaki Nath",
""
],
[
"Xiang",
"Tao",
""
],
[
"Song",
"Yi-Zhe",
""
]
] |
new_dataset
| 0.993949 |
2303.13959
|
Bohan Li
|
Bohan Li, Yasheng Sun, Xin Jin, Wenjun Zeng, Zheng Zhu, Xiaoefeng
Wang, Yunpeng Zhang, James Okae, Hang Xiao, Dalong Du
|
StereoScene: BEV-Assisted Stereo Matching Empowers 3D Semantic Scene
Completion
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D semantic scene completion (SSC) is an ill-posed task that requires
inferring a dense 3D scene from incomplete observations. Previous methods
either explicitly incorporate 3D geometric input or rely on learnt 3D prior
behind monocular RGB images. However, 3D sensors such as LiDAR are expensive
and intrusive while monocular cameras face challenges in modeling precise
geometry due to the inherent ambiguity. In this work, we propose StereoScene
for 3D Semantic Scene Completion (SSC), which explores taking full advantage of
light-weight camera inputs without resorting to any external 3D sensors. Our
key insight is to leverage stereo matching to resolve geometric ambiguity. To
improve its robustness in unmatched areas, we introduce bird's-eye-view (BEV)
representation to inspire hallucination ability with rich context information.
On top of the stereo and BEV representations, a mutual interactive aggregation
(MIA) module is carefully devised to fully unleash their power. Specifically, a
Bi-directional Interaction Transformer (BIT) augmented with confidence
re-weighting is used to encourage reliable prediction through mutual guidance
while a Dual Volume Aggregation (DVA) module is designed to facilitate
complementary aggregation. Experimental results on SemanticKITTI demonstrate
that the proposed StereoScene outperforms the state-of-the-art camera-based
methods by a large margin with a relative improvement of 26.9% in geometry and
38.6% in semantics.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 12:33:44 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 09:09:27 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Li",
"Bohan",
""
],
[
"Sun",
"Yasheng",
""
],
[
"Jin",
"Xin",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Wang",
"Xiaoefeng",
""
],
[
"Zhang",
"Yunpeng",
""
],
[
"Okae",
"James",
""
],
[
"Xiao",
"Hang",
""
],
[
"Du",
"Dalong",
""
]
] |
new_dataset
| 0.996352 |
2303.15750
|
Hemn Abdalla
|
Ozelot Vanilla, Jingxiang Yu, Hemn Barzan Abdalla, Haozhe Cui
|
Cesno: Possibility of Creating a New Programming Language
|
20 pages, 1 figure, 5 tables
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programming languages are incredibly versatile, enabling developers to create
applications and programs that suit their individual requirements. This article
introduces a new language called Cesno, designed from the ground up to offer an
advanced, user-friendly, and easy-to-use programming environment. Cesno's
syntax is similar to other popular languages, making it simple to learn and
work with. It incorporates features from other languages, such as syntactic
sugar, a built-in library, support for functional programming, object-oriented
programming, dynamic typing, a type system, and a variety of function
parameters and restrictions. This article will explore the design of Cesno's
grammar, provide a brief overview of how Cesno processes and compiles code, and
provide examples of what Cesno's code looks like and how it can aid in
development.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 06:13:16 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2023 01:32:37 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 07:20:07 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Vanilla",
"Ozelot",
""
],
[
"Yu",
"Jingxiang",
""
],
[
"Abdalla",
"Hemn Barzan",
""
],
[
"Cui",
"Haozhe",
""
]
] |
new_dataset
| 0.996736 |
2303.16940
|
James Giroux
|
James Giroux, Martin Bouchard, Robert Laganiere
|
T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC
Radar Signals
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Object detection utilizing Frequency Modulated Continuous Wave radar is
becoming increasingly popular in the field of autonomous systems. Radar does
not possess the same drawbacks seen by other emission-based sensors such as
LiDAR, primarily the degradation or loss of return signals due to weather
conditions such as rain or snow. However, radar does possess traits that make
it unsuitable for standard emission-based deep learning representations such as
point clouds. Radar point clouds tend to be sparse and therefore information
extraction is not efficient. To overcome this, more traditional digital signal
processing pipelines were adapted to form inputs residing directly in the
frequency domain via Fast Fourier Transforms. Commonly, three transformations
were used to form Range-Azimuth-Doppler cubes in which deep learning algorithms
could perform object detection. This too has drawbacks, namely the
pre-processing costs associated with performing multiple Fourier Transforms and
normalization. We explore the possibility of operating on raw radar inputs from
analog to digital converters via the utilization of complex transformation
layers. Moreover, we introduce hierarchical Swin Vision transformers to the
field of radar object detection and show their capability to operate on inputs
varying in pre-processing, along with different radar configurations, i.e.
relatively low and high numbers of transmitters and receivers, while obtaining
on par or better results than the state-of-the-art.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 18:04:19 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Giroux",
"James",
""
],
[
"Bouchard",
"Martin",
""
],
[
"Laganiere",
"Robert",
""
]
] |
new_dataset
| 0.998951 |
2303.16949
|
Irfansha Shaik
|
Irfansha Shaik and Jaco van de Pol
|
Concise QBF Encodings for Games on a Grid (extended version)
|
15 pages (main paper), 20 listings, 3 figures, 3 tables and 2
appendix sections
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Encoding 2-player games in QBF correctly and efficiently is challenging and
error-prone. To enable concise specifications and uniform encodings of games
played on grid boards, like Tic-Tac-Toe, Connect-4, Domineering, Pursuer-Evader
and Breakthrough, we introduce Board-game Domain Definition Language (BDDL),
inspired by the success of PDDL in the planning domain.
We provide an efficient translation from BDDL into QBF, encoding the
existence of a winning strategy of bounded depth. Our lifted encoding treats
board positions symbolically and allows concise definitions of conditions,
effects and winning configurations, relative to symbolic board positions. The
size of the encoding grows linearly in the input model and the considered
depth.
To show the feasibility of such a generic approach, we use QBF solvers to
compute the critical depths of winning strategies for instances of several
known games. For several games, our work provides the first QBF encoding.
Unlike plan validation in SAT-based planning, validating QBF-based winning
strategies is difficult. We show how to validate winning strategies using QBF
certificates and interactive game play.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 18:11:41 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Shaik",
"Irfansha",
""
],
[
"van de Pol",
"Jaco",
""
]
] |
new_dataset
| 0.99696 |
2303.17061
|
Zhenhua Chen
|
Zhenhua Chen and David Crandall
|
A Tensor-based Convolutional Neural Network for Small Dataset
Classification
| null | null | null | null |
cs.CV cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by the ConvNets with structured hidden representations, we propose a
Tensor-based Neural Network, TCNN. Different from ConvNets, TCNNs are composed
of structured neurons rather than scalar neurons, and the basic operation is
neuron tensor transformation. Unlike other structured ConvNets, where the
part-whole relationships are modeled explicitly, the relationships are learned
implicitly in TCNNs. Also, the structured neurons in TCNNs are high-rank
tensors rather than vectors or matrices. We compare TCNNs with current popular
ConvNets, including ResNets, MobileNets, EfficientNets, RegNets, etc., on
CIFAR10, CIFAR100, and Tiny ImageNet. The experiment shows that TCNNs have
higher efficiency in terms of parameters. TCNNs also show higher robustness
against white-box adversarial attacks on MNIST compared to ConvNets.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 23:23:01 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Chen",
"Zhenhua",
""
],
[
"Crandall",
"David",
""
]
] |
new_dataset
| 0.991582 |
2303.17075
|
Lenore Blum
|
Lenore Blum, Manuel Blum
|
Viewpoint: A Theoretical Computer Science Perspective on Consciousness
and Artificial General Intelligence
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We have defined the Conscious Turing Machine (CTM) for the purpose of
investigating a Theoretical Computer Science (TCS) approach to consciousness.
For this, we have hewn to the TCS demand for simplicity and understandability.
The CTM is consequently and intentionally a simple machine. It is not a model
of the brain, though its design has greatly benefited - and continues to
benefit - from neuroscience and psychology. The CTM is a model of and for
consciousness.
Although it is developed to understand consciousness, the CTM offers a
thoughtful and novel guide to the creation of an Artificial General
Intelligence (AGI). For example, the CTM has an enormous number of powerful
processors, some with specialized expertise, others unspecialized but poised to
develop an expertise. For whatever problem must be dealt with, the CTM has an
excellent way to utilize those processors that have the required knowledge,
ability, and time to work on the problem, even if it is not aware of which ones
these may be.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 00:39:10 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Blum",
"Lenore",
""
],
[
"Blum",
"Manuel",
""
]
] |
new_dataset
| 0.980553 |
2303.17096
|
Xiaodan Li
|
Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue
|
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
|
Accepted by CVPR2023
|
CVPR 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies have shown that higher accuracy on ImageNet usually leads to
better robustness against different corruptions. Therefore, in this paper,
instead of following the traditional research paradigm that investigates new
out-of-distribution corruptions or perturbations deep models may encounter, we
conduct model debugging in in-distribution data to explore which object
attributes a model may be sensitive to. To achieve this goal, we create a
toolkit for object editing with controls of backgrounds, sizes, positions, and
directions, and create a rigorous benchmark named ImageNet-E(diting) for
evaluating the image classifier robustness in terms of object attributes. With
our ImageNet-E, we evaluate the performance of current deep learning models,
including both convolutional neural networks and vision transformers. We find
that most models are quite sensitive to attribute changes. A small change in
the background can lead to an average drop of 9.23\% in top-1 accuracy. We also
evaluate some robust models including both adversarially trained models and
other robust trained models and find that some models show worse robustness
against attribute changes than vanilla models. Based on these findings, we
discover ways to enhance attribute robustness with preprocessing, architecture
designs, and training strategies. We hope this work can provide some insights
to the community and open up a new avenue for research in robust computer
vision. The code and dataset are available at
https://github.com/alibaba/easyrobust.
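  As a hedged illustration of the reported metric (not the benchmark's own evaluation code), the sketch below computes the top-1 accuracy drop between predictions on original and attribute-edited images; the toy logits are synthetic.
```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the label."""
    return float((logits.argmax(axis=1) == labels).mean())

def accuracy_drop(logits_original, logits_edited, labels):
    """Top-1 accuracy on original images minus accuracy on attribute-edited copies."""
    return top1_accuracy(logits_original, labels) - top1_accuracy(logits_edited, labels)

# Synthetic toy numbers: 1000 samples, 10 classes, edited predictions noisier.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
clean = rng.normal(size=(1000, 10))
clean[np.arange(1000), labels] += 2.0
edited = clean + rng.normal(scale=1.5, size=clean.shape)
print(f"top-1 drop: {100 * accuracy_drop(clean, edited, labels):.2f}%")
```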
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 02:02:32 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Li",
"Xiaodan",
""
],
[
"Chen",
"Yuefeng",
""
],
[
"Zhu",
"Yao",
""
],
[
"Wang",
"Shuhui",
""
],
[
"Zhang",
"Rong",
""
],
[
"Xue",
"Hui",
""
]
] |
new_dataset
| 0.973351 |
2303.17099
|
Hongxiang Cai
|
Hongxiang Cai, Zeyuan Zhang, Zhenyu Zhou, Ziyin Li, Wenbo Ding, Jiuhua
Zhao
|
BEVFusion4D: Learning LiDAR-Camera Fusion Under Bird's-Eye-View via
Cross-Modality Guidance and Temporal Aggregation
|
13 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integrating LiDAR and Camera information into Bird's-Eye-View (BEV) has
become an essential topic for 3D object detection in autonomous driving.
Existing methods mostly adopt an independent dual-branch framework to generate
LiDAR and camera BEV, then perform an adaptive modality fusion. Since point
clouds provide more accurate localization and geometry information, they could
serve as a reliable spatial prior for acquiring relevant semantic information
from the images. Therefore, we design a LiDAR-Guided View Transformer (LGVT) to
effectively obtain the camera representation in BEV space and thus benefit the
whole dual-branch fusion system. LGVT takes camera BEV as the primitive
semantic query, repeatedly leveraging the spatial cue of LiDAR BEV for
extracting image features across multiple camera views. Moreover, we extend our
framework into the temporal domain with our proposed Temporal Deformable
Alignment (TDA) module, which aims to aggregate BEV features from multiple
historical frames. Including these two modules, our framework dubbed
BEVFusion4D achieves state-of-the-art results in 3D object detection, with
72.0% mAP and 73.5% NDS on the nuScenes validation set, and 73.3% mAP and 74.7%
NDS on nuScenes test set, respectively.
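  A minimal NumPy sketch of the generic mechanism the LGVT description suggests -- camera BEV tokens querying LiDAR BEV tokens via cross-attention. The single-head form, shapes, and random weights are assumptions, not the paper's architecture.
```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def lidar_guided_cross_attention(cam_bev, lidar_bev, wq, wk, wv):
    """Single-head cross-attention: camera BEV tokens query LiDAR BEV tokens.

    cam_bev:   (N_cam, C)   flattened camera BEV features (queries)
    lidar_bev: (N_lidar, C) flattened LiDAR BEV features (keys/values)
    wq, wk, wv: (C, D) projection matrices
    """
    q, k, v = cam_bev @ wq, lidar_bev @ wk, lidar_bev @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)   # (N_cam, N_lidar)
    return attn @ v                                           # (N_cam, D)

C, D = 64, 64
rng = np.random.default_rng(0)
out = lidar_guided_cross_attention(
    rng.normal(size=(100, C)), rng.normal(size=(400, C)),
    rng.normal(size=(C, D)), rng.normal(size=(C, D)), rng.normal(size=(C, D)))
print(out.shape)  # (100, 64)
```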
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 02:18:07 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Cai",
"Hongxiang",
""
],
[
"Zhang",
"Zeyuan",
""
],
[
"Zhou",
"Zhenyu",
""
],
[
"Li",
"Ziyin",
""
],
[
"Ding",
"Wenbo",
""
],
[
"Zhao",
"Jiuhua",
""
]
] |
new_dataset
| 0.961863 |
2303.17147
|
Yao Yao None
|
Jingyang Zhang, Yao Yao, Shiwei Li, Jingbo Liu, Tian Fang, David
McKinnon, Yanghai Tsin, Long Quan
|
NeILF++: Inter-Reflectable Light Fields for Geometry and Material
Estimation
|
Project page: \url{https://yoyo000.github.io/NeILF_pp}
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel differentiable rendering framework for joint geometry,
material, and lighting estimation from multi-view images. In contrast to
previous methods which assume a simplified environment map or co-located
flashlights, in this work, we formulate the lighting of a static scene as one
neural incident light field (NeILF) and one outgoing neural radiance field
(NeRF). The key insight of the proposed method is the union of the incident and
outgoing light fields through physically-based rendering and inter-reflections
between surfaces, making it possible to disentangle the scene geometry,
material, and lighting from image observations in a physically-based manner.
The proposed incident light and inter-reflection framework can be easily
applied to other NeRF systems. We show that our method can not only decompose
the outgoing radiance into incident lights and surface materials, but also
serve as a surface refinement module that further improves the reconstruction
detail of the neural surface. We demonstrate on several datasets that the
proposed method is able to achieve state-of-the-art results in terms of
geometry reconstruction quality, material estimation accuracy, and the fidelity
of novel view rendering.
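  For intuition only: a Monte Carlo estimate of the physically-based rendering integral L_o(x) = ∫ L_i(x, ω) f(x, ω) (n·ω) dω that such a framework builds on, with a placeholder constant light field and Lambertian BRDF standing in for the learned incident light field and material.
```python
import numpy as np

def sample_hemisphere(normal, n, rng):
    """Uniformly sample n unit directions on the hemisphere around `normal`."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[v @ normal < 0] *= -1.0          # flip into the upper hemisphere
    return v

def outgoing_radiance(x, normal, incident_light, brdf, n_samples=2048, seed=0):
    """Monte Carlo estimate of L_o(x) = integral of L_i * f * (n . w) over the hemisphere."""
    rng = np.random.default_rng(seed)
    dirs = sample_hemisphere(normal, n_samples, rng)
    cos = np.clip(dirs @ normal, 0.0, None)                  # (n,)
    li = np.array([incident_light(x, d) for d in dirs])      # (n, 3)
    f = np.array([brdf(x, d) for d in dirs])                 # (n, 3)
    return (2.0 * np.pi) * np.mean(li * f * cos[:, None], axis=0)  # pdf = 1/(2*pi)

# Placeholder scene: constant white incident light and a grey Lambertian surface.
L = outgoing_radiance(
    x=np.zeros(3), normal=np.array([0.0, 0.0, 1.0]),
    incident_light=lambda x, d: np.ones(3),
    brdf=lambda x, d: np.full(3, 0.5 / np.pi))
print(L)  # approximately [0.5, 0.5, 0.5] (albedo 0.5 under unit uniform lighting)
```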
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 04:59:48 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zhang",
"Jingyang",
""
],
[
"Yao",
"Yao",
""
],
[
"Li",
"Shiwei",
""
],
[
"Liu",
"Jingbo",
""
],
[
"Fang",
"Tian",
""
],
[
"McKinnon",
"David",
""
],
[
"Tsin",
"Yanghai",
""
],
[
"Quan",
"Long",
""
]
] |
new_dataset
| 0.971194 |
2303.17183
|
Magnus Sahlgren
|
Joey \"Ohman, Severine Verlinden, Ariel Ekgren, Amaru Cuba Gyllensten,
Tim Isbister, Evangelia Gogoulou, Fredrik Carlsson, Magnus Sahlgren
|
The Nordic Pile: A 1.2TB Nordic Dataset for Language Modeling
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-training Large Language Models (LLMs) requires massive amounts of text
data, and the performance of the LLMs typically correlates with the scale and
quality of the datasets. This means that it may be challenging to build LLMs
for smaller languages such as Nordic ones, where the availability of text
corpora is limited. In order to facilitate the development of LLMs in the
Nordic languages, we curate a high-quality dataset consisting of 1.2TB of text,
in all of the major North Germanic languages (Danish, Icelandic, Norwegian, and
Swedish), as well as some high-quality English data. This paper details our
considerations and processes for collecting, cleaning, and filtering the
dataset.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 06:42:22 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Öhman",
"Joey",
""
],
[
"Verlinden",
"Severine",
""
],
[
"Ekgren",
"Ariel",
""
],
[
"Gyllensten",
"Amaru Cuba",
""
],
[
"Isbister",
"Tim",
""
],
[
"Gogoulou",
"Evangelia",
""
],
[
"Carlsson",
"Fredrik",
""
],
[
"Sahlgren",
"Magnus",
""
]
] |
new_dataset
| 0.995625 |
2303.17189
|
Xi Li
|
Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, Xi
Li
|
LayoutDiffusion: Controllable Diffusion Model for Layout-to-image
Generation
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, diffusion models have achieved great success in image synthesis.
However, when it comes to the layout-to-image generation where an image often
has a complex scene of multiple objects, how to exert strong control over both
the global layout map and each detailed object remains a challenging task. In
this paper, we propose a diffusion model named LayoutDiffusion that can obtain
higher generation quality and greater controllability than the previous works.
To overcome the difficult multimodal fusion of image and layout, we propose to
construct a structural image patch with region information and transform the
patched image into a special layout to fuse with the normal layout in a unified
form. Moreover, Layout Fusion Module (LFM) and Object-aware Cross Attention
(OaCA) are proposed to model the relationship among multiple objects and
designed to be object-aware and position-sensitive, allowing for precisely
controlling the spatial related information. Extensive experiments show that
our LayoutDiffusion outperforms the previous SOTA methods on FID, CAS by
relatively 46.35%, 26.70% on COCO-stuff and 44.29%, 41.82% on VG. Code is
available at https://github.com/ZGCTroy/LayoutDiffusion.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 06:56:12 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zheng",
"Guangcong",
""
],
[
"Zhou",
"Xianpan",
""
],
[
"Li",
"Xuewei",
""
],
[
"Qi",
"Zhongang",
""
],
[
"Shan",
"Ying",
""
],
[
"Li",
"Xi",
""
]
] |
new_dataset
| 0.964628 |
2303.17204
|
Subhadeep Ranjan Dev
|
Binay Bhattacharya, Sandip Das, and Subhadeep Ranjan Dev
|
A Subquadratic Time Algorithm for the Weighted $k$-Center Problem on
Cactus Graphs
|
Submitted to Theoretical Computer Science
| null | null | null |
cs.DS cs.CG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The weighted $k$-center problem in graphs is a classical facility location
problem where we place $k$ centers on the graph, which minimize the maximum
weighted distance of a vertex to its nearest center. We study this problem when
the underlying graph is a cactus with $n$ vertices and present an $O(n \log^2
n)$ time algorithm for the same. This time complexity improves upon the
$O(n^2)$ time algorithm by Ben-Moshe et al. [TCS 2007], which is the current
state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 07:56:12 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Bhattacharya",
"Binay",
""
],
[
"Das",
"Sandip",
""
],
[
"Dev",
"Subhadeep Ranjan",
""
]
] |
new_dataset
| 0.950479 |
2303.17210
|
Hao Xu
|
Hao Xu, Xun Liu, Qinghai Zeng, Qiang Li, Shibin Ge, Guohua Zhou and
Raymond Forbes
|
DecentRAN: Decentralized Radio Access Network for 5.5G and beyond
| null | null | null | null |
cs.CR cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Radio Access Network (RAN) faces challenges from privacy requirements and from flexible
wide area and local area network access. The RAN is limited in providing local services directly
by the centralized design of the cellular network and by concerns over user privacy and
data security. DecentRAN, or Decentralized Radio Access Network, offers an
alternative perspective to cope with the emerging demands of 5G Non-public
Network and the hybrid deployment of 5GS and Wi-Fi in the campus network.
Starting from Public key as an Identity, independent mutual authentication
between UE and RAN are made possible in a privacy-preserving manner. With the
introduction of decentralized architecture and network functions using
blockchain and smart contracts, DecentRAN has the ability to provide users with
locally managed, end-to-end encrypted 5G NPN and the potential connectivity to
Local Area Network via campus routers. Furthermore, the performance regarding
throughput and latency is discussed, offering deployment guidance for
DecentRAN.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 08:13:29 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Xu",
"Hao",
""
],
[
"Liu",
"Xun",
""
],
[
"Zeng",
"Qinghai",
""
],
[
"Li",
"Qiang",
""
],
[
"Ge",
"Shibin",
""
],
[
"Zhou",
"Guohua",
""
],
[
"Forbes",
"Raymond",
""
]
] |
new_dataset
| 0.99957 |
2303.17225
|
Jie Qin
|
Jie Qin, Jie Wu, Pengxiang Yan, Ming Li, Ren Yuxi, Xuefeng Xiao,
Yitong Wang, Rui Wang, Shilei Wen, Xin Pan, Xingang Wang
|
FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation
|
Accepted by CVPR 2023; camera-ready version
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, open-vocabulary learning has emerged to accomplish segmentation for
arbitrary categories given text-based descriptions, which extends
segmentation systems to more general-purpose application scenarios. However,
existing methods are devoted to designing specialized architectures or parameters
for specific segmentation tasks. These customized design paradigms lead to
fragmentation between various segmentation tasks, thus hindering the uniformity
of segmentation models. Hence in this paper, we propose FreeSeg, a generic
framework to accomplish Unified, Universal and Open-Vocabulary Image
Segmentation. FreeSeg optimizes an all-in-one network via one-shot training and
employs the same architecture and parameters to handle diverse segmentation
tasks seamlessly in the inference procedure. Additionally, adaptive prompt
learning facilitates the unified model to capture task-aware and
category-sensitive concepts, improving model robustness in multi-task and
varied scenarios. Extensive experimental results demonstrate that FreeSeg
establishes new state-of-the-art results in performance and generalization on
three segmentation tasks, which outperforms the best task-specific
architectures by a large margin: 5.5% mIoU on semantic segmentation, 17.6% mAP
on instance segmentation, 20.1% PQ on panoptic segmentation for the unseen
class on COCO.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 08:42:49 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Qin",
"Jie",
""
],
[
"Wu",
"Jie",
""
],
[
"Yan",
"Pengxiang",
""
],
[
"Li",
"Ming",
""
],
[
"Yuxi",
"Ren",
""
],
[
"Xiao",
"Xuefeng",
""
],
[
"Wang",
"Yitong",
""
],
[
"Wang",
"Rui",
""
],
[
"Wen",
"Shilei",
""
],
[
"Pan",
"Xin",
""
],
[
"Wang",
"Xingang",
""
]
] |
new_dataset
| 0.992335 |
2303.17252
|
Venus Pasandi
|
Venus Pasandi and Daniele Pucci
|
Torque Control with Joints Position and Velocity Limits Avoidance
|
To be published in IEEE-ICRA 2023 proceedings, 7 pages with 6 figures
| null | null | null |
cs.RO math.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The design of a control architecture that provides the desired motion while
respecting the joint limits of a robotic system is still an
open challenge in control and robotics. This paper presents a torque control
architecture for fully actuated manipulators for tracking the desired
time-varying trajectory while ensuring the joints position and velocity limits.
The presented architecture stems from the parametrization of the feasible
joints position and velocity space by exogenous states. The proposed
parametrization transforms the control problem with constrained states to an
un-constrained one by replacing the joints position and velocity with the
exogenous states. With the help of Lyapunov-based arguments, we prove that the
proposed control architecture ensures the stability and convergence of the
desired joint trajectory along with the joints position and velocity limits
avoidance. We validate the performance of the proposed architecture through various
simulations on a simple two-degree-of-freedom manipulator and the humanoid
robot iCub.
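  A minimal sketch of the core idea of parametrizing the feasible joint-position interval by an unconstrained exogenous state, so that every exogenous value maps strictly inside the limits; the tanh map is an illustrative choice, not the paper's exact parametrization.
```python
import numpy as np

def joint_from_exogenous(xi, q_min, q_max):
    """Map an unconstrained exogenous state xi into the open interval (q_min, q_max)."""
    mid = 0.5 * (q_max + q_min)
    half = 0.5 * (q_max - q_min)
    return mid + half * np.tanh(xi)

def exogenous_from_joint(q, q_min, q_max, eps=1e-9):
    """Inverse map, defined for joint positions strictly inside the limits."""
    mid = 0.5 * (q_max + q_min)
    half = 0.5 * (q_max - q_min)
    return np.arctanh(np.clip((q - mid) / half, -1 + eps, 1 - eps))

q_min, q_max = np.array([-1.0, -2.0]), np.array([1.0, 0.5])
xi = np.array([3.0, -10.0])                       # arbitrary unconstrained values
q = joint_from_exogenous(xi, q_min, q_max)
print(q, bool((q > q_min).all() and (q < q_max).all()))   # always inside the limits
print(exogenous_from_joint(q, q_min, q_max))              # recovers xi (up to clipping)
```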
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 09:30:26 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Pasandi",
"Venus",
""
],
[
"Pucci",
"Daniele",
""
]
] |
new_dataset
| 0.992078 |
2303.17294
|
Yifu Liu
|
Yifu Liu, Xiaoxia Li, Zhiling Luo, Wei Zhou
|
JCDNet: Joint of Common and Definite phases Network for Weakly
Supervised Temporal Action Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly-supervised temporal action localization aims to localize action
instances in untrimmed videos with only video-level supervision. We observe
that different actions share common phases, e.g., the run-up in the HighJump
and LongJump. These different actions are defined as conjoint actions, whose
remaining parts are definite phases, e.g., leaping over the bar in a HighJump.
Compared with the common phases, the definite phases are more easily localized
in existing research. Most existing methods formulate this task as a Multiple Instance
Learning paradigm, in which the common phases tend to be confused with
the background, and this affects the localization completeness of the conjoint
actions. To tackle this challenge, we propose a Joint of Common and Definite
phases Network (JCDNet) by improving feature discriminability of the conjoint
actions. Specifically, we design a Class-Aware Discriminative module to enhance
the contribution of the common phases in classification by the guidance of the
coarse definite-phase features. Besides, we introduce a temporal attention
module to learn robust action-ness scores via modeling temporal dependencies,
distinguishing the common phases from the background. Extensive experiments on
three datasets (THUMOS14, ActivityNetv1.2, and a conjoint-action subset)
demonstrate that JCDNet achieves competitive performance against the
state-of-the-art methods. Keywords: weakly-supervised learning, temporal action
localization, conjoint action
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 11:09:02 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Liu",
"Yifu",
""
],
[
"Li",
"Xiaoxia",
""
],
[
"Luo",
"Zhiling",
""
],
[
"Zhou",
"Wei",
""
]
] |
new_dataset
| 0.991311 |
2303.17314
|
Stefano Maria Nicoletti
|
Stefano M. Nicoletti and Milan Lopuha\"a-Zwakenberg and E. Moritz Hahn
and Mari\"elle Stoelinga
|
PFL: a Probabilistic Logic for Fault Trees
|
arXiv admin note: text overlap with arXiv:2208.13424
|
In: Chechik, M., Katoen, JP., Leucker, M. (eds) Formal Methods. FM
2023. Lecture Notes in Computer Science, vol 14000. Springer, Cham
|
10.1007/978-3-031-27481-7_13
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety-critical infrastructures must operate in a safe and reliable way.
Fault tree analysis is a widespread method used for risk assessment of these
systems: fault trees (FTs) are required by, e.g., the Federal Aviation
Administration and the Nuclear Regulatory Commission. In spite of their
popularity, little work has been done on formulating structural queries about
FTs and analyzing them, e.g., when evaluating potential scenarios, or on giving
practitioners instruments to formulate queries on FTs in an understandable yet
powerful way. In this paper, we aim to fill this gap by extending BFL [32], a
logic that reasons about Boolean FTs. To do so, we introduce a Probabilistic
Fault tree Logic (PFL). PFL is a simple, yet expressive logic that supports
easier formulation of complex scenarios and specification of FT properties that
comprise probabilities. Alongside PFL, we present LangPFL, a domain specific
language to further ease property specification. We showcase PFL and LangPFL by
applying them to a COVID-19 related FT and to a FT for an oil/gas pipeline.
Finally, we present theory and model checking algorithms based on binary
decision diagrams (BDDs).
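  PFL itself is a logic and is not reproduced here; the sketch below only shows the kind of underlying quantitative computation a PFL query reasons about -- the top-event probability of a small fault tree with AND/OR gates over independent basic events.
```python
def ft_probability(node, basic_prob):
    """Top-event probability of a fault tree over independent basic events.

    node: a basic-event name (str) or a tuple ('AND' | 'OR', [children]).
    basic_prob: dict mapping basic-event names to failure probabilities.
    """
    if isinstance(node, str):
        return basic_prob[node]
    gate, children = node
    probs = [ft_probability(c, basic_prob) for c in children]
    if gate == 'AND':                 # fails only if every child fails
        out = 1.0
        for p in probs:
            out *= p
        return out
    if gate == 'OR':                  # fails if at least one child fails
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(f"unknown gate: {gate}")

# Toy tree: the system fails if the pump fails, or if both valves fail.
tree = ('OR', ['pump', ('AND', ['valve_a', 'valve_b'])])
print(ft_probability(tree, {'pump': 0.01, 'valve_a': 0.1, 'valve_b': 0.1}))  # 0.0199
```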
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 12:07:34 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Nicoletti",
"Stefano M.",
""
],
[
"Lopuhaä-Zwakenberg",
"Milan",
""
],
[
"Hahn",
"E. Moritz",
""
],
[
"Stoelinga",
"Mariëlle",
""
]
] |
new_dataset
| 0.998888 |
2303.17316
|
Huiyu Duan
|
Huiyu Duan, Wei Shen, Xiongkuo Min, Danyang Tu, Long Teng, Jia Wang,
Guangtao Zhai
|
Masked Autoencoders as Image Processors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have shown significant effectiveness for various vision tasks
including both high-level vision and low-level vision. Recently, masked
autoencoders (MAE) for feature pre-training have further unleashed the
potential of Transformers, leading to state-of-the-art performances on various
high-level vision tasks. However, the significance of MAE pre-training on
low-level vision tasks has not been sufficiently explored. In this paper, we
show that masked autoencoders are also scalable self-supervised learners for
image processing tasks. We first present an efficient Transformer model
considering both channel attention and shifted-window-based self-attention
termed CSformer. Then we develop an effective MAE architecture for image
processing (MAEIP) tasks. Extensive experimental results show that with the
help of MAEIP pre-training, our proposed CSformer achieves state-of-the-art
performance on various image processing tasks, including Gaussian denoising,
real image denoising, single-image motion deblurring, defocus deblurring, and
image deraining.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 12:09:35 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Duan",
"Huiyu",
""
],
[
"Shen",
"Wei",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Tu",
"Danyang",
""
],
[
"Teng",
"Long",
""
],
[
"Wang",
"Jia",
""
],
[
"Zhai",
"Guangtao",
""
]
] |
new_dataset
| 0.984625 |
2303.17334
|
Xinxin Hu
|
Xinxin Hu, Haotian Chen, Junjie Zhang, Hongchang Chen, Shuxin Liu,
Xing Li, Yahui Wang, and Xiangyang Xue
|
GAT-COBO: Cost-Sensitive Graph Neural Network for Telecom Fraud
Detection
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Along with the rapid evolution of mobile communication technologies, such as
5G, there has been a drastic increase in telecom fraud, which significantly
dissipates individual fortune and social wealth. In recent years, graph mining
techniques are gradually becoming a mainstream solution for detecting telecom
fraud. However, the graph imbalance problem, caused by the Pareto principle,
brings severe challenges to graph data mining. This is a new and challenging
problem that has received little attention in previous work. In this paper, we propose a
Graph ATtention network with COst-sensitive BOosting (GAT-COBO) for the graph
imbalance problem. First, we design a GAT-based base classifier to learn the
embeddings of all nodes in the graph. Then, we feed the embeddings into a
well-designed cost-sensitive learner for imbalanced learning. Next, we update
the weights according to the misclassification cost to make the model focus
more on the minority class. Finally, we sum the node embeddings obtained by
multiple cost-sensitive learners to obtain a comprehensive node representation,
which is used for the downstream anomaly detection task. Extensive experiments
on two real-world telecom fraud detection datasets demonstrate that our
proposed method is effective for the graph imbalance problem, outperforming the
state-of-the-art GNNs and GNN-based fraud detectors. In addition, our model is
also helpful for solving the widespread over-smoothing problem in GNNs. The
GAT-COBO code and datasets are available at https://github.com/xxhu94/GAT-COBO.
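  A hedged sketch of a cost-sensitive sample-weight update of the kind the boosting stage describes, where misclassified minority-class (fraud) samples receive a larger cost and hence a larger weight increase; the exponential form and the cost values are illustrative assumptions, not the paper's exact rule.
```python
import numpy as np

def cost_sensitive_update(weights, y_true, y_pred, cost_fraud=5.0, cost_normal=1.0, lr=0.5):
    """Boost the weights of misclassified samples, scaled by a per-class cost."""
    cost = np.where(y_true == 1, cost_fraud, cost_normal)    # label 1 = fraud (minority)
    miss = (y_true != y_pred).astype(float)
    new_w = weights * np.exp(lr * cost * miss)
    return new_w / new_w.sum()                               # renormalise to a distribution

y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 0, 0, 1])        # one false positive, one missed fraud case
w = np.full(len(y_true), 1.0 / len(y_true))
print(np.round(cost_sensitive_update(w, y_true, y_pred), 3))
# The missed fraud sample receives by far the largest weight for the next learner.
```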
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 07:02:50 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Hu",
"Xinxin",
""
],
[
"Chen",
"Haotian",
""
],
[
"Zhang",
"Junjie",
""
],
[
"Chen",
"Hongchang",
""
],
[
"Liu",
"Shuxin",
""
],
[
"Li",
"Xing",
""
],
[
"Wang",
"Yahui",
""
],
[
"Xue",
"Xiangyang",
""
]
] |
new_dataset
| 0.983342 |
2303.17373
|
Pierre-Victor Besson
|
Pierre-Victor Besson, Val\'erie Viet Triem Tong, Gilles Guette,
Guillaume Piolle, Erwan Abgrall
|
URSID: Using formalism to Refine attack Scenarios for vulnerable
Infrastructure Deployment
|
13 pages, 9 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we propose a novel way of deploying vulnerable architectures
for defense and research purposes, which aims to generate deception platforms
based on the formal description of a scenario. An attack scenario is described
by an attack graph in which transitions are labeled by ATT&CK techniques or
procedures. The state of the attacker is modeled as a set of secrets he
acquires and a set of nodes he controls. Descriptions of a single scenario on a
technical level can then be refined into several different scenarios on a
procedural level, and each of these scenarios can be deployed into its own
vulnerable architecture. To achieve this goal we introduce the notion of
architecture constraints, as some procedures may only be exploited on systems
presenting particular properties, such as having a specific operating system
version. Finally, we present our deployment process for converting one of these
scenarios into a vulnerable infrastructure, and offer an online proof of
concept demonstration of our tool, where readers may locally deploy a
complete scenario inspired by the threat actor APT-29.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 13:41:15 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Besson",
"Pierre-Victor",
""
],
[
"Tong",
"Valérie Viet Triem",
""
],
[
"Guette",
"Gilles",
""
],
[
"Piolle",
"Guillaume",
""
],
[
"Abgrall",
"Erwan",
""
]
] |
new_dataset
| 0.999424 |
2303.17388
|
Linyue Liu
|
Linyue Liu, Xi Guo, Chun Ouyang, Patrick C. K. Hung, Hong-Yu Zhang,
Keqing He, Chen Mo and Zaiwen Feng
|
BPCE: A Prototype for Co-Evolution between Business Process Variants
through Configurable Process Model
|
18 pages , 11 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the continuous development of business process management technology,
a growing number of business process models are owned by large enterprises.
In large enterprises, different stakeholders may modify the same business
process model. In order to better manage the changeability of processes, they
adopt configurable business process models to manage process variants. However,
the process variants will vary with the change in enterprise business demands.
Therefore, it is necessary to explore the co-evolution of the process variants
so as to effectively manage the business process family. To this end, a novel
framework for co-evolution between business process variants through a
configurable process model is proposed in this work. First, the mapping
relationship between process variants and configurable models is standardized
in this study. A series of change operations and change propagation operations
between process variants and configurable models are further defined for
achieving propagation. Then, an overall algorithm is proposed for achieving
co-evolution of process variants. Next, a prototype is developed for managing
change synchronization between process variants and configurable process
models. Finally, the effectiveness and efficiency of our proposed process
change propagation method are verified based on experiments on two business
process datasets. The experimental results show that our approach implements
the co-evolution of process variants with high accuracy and efficiency.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 13:59:34 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Liu",
"Linyue",
""
],
[
"Guo",
"Xi",
""
],
[
"Ouyang",
"Chun",
""
],
[
"Hung",
"Patrick C. K.",
""
],
[
"Zhang",
"Hong-Yu",
""
],
[
"He",
"Keqing",
""
],
[
"Mo",
"Chen",
""
],
[
"Feng",
"Zaiwen",
""
]
] |
new_dataset
| 0.97976 |
2303.17472
|
Qitao Zhao
|
Qitao Zhao, Ce Zheng, Mengyuan Liu, Pichao Wang, Chen Chen
|
PoseFormerV2: Exploring Frequency Domain for Efficient and Robust 3D
Human Pose Estimation
|
Accepted to CVPR 2023. Project page:
https://qitaozhao.github.io/PoseFormerV2
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, transformer-based methods have gained significant success in
sequential 2D-to-3D lifting human pose estimation. As a pioneering work,
PoseFormer captures spatial relations of human joints in each video frame and
human dynamics across frames with cascaded transformer layers and has achieved
impressive performance. However, in real scenarios, the performance of
PoseFormer and its follow-ups is limited by two factors: (a) The length of the
input joint sequence; (b) The quality of 2D joint detection. Existing methods
typically apply self-attention to all frames of the input sequence, causing a
huge computational burden when the frame number is increased to obtain advanced
estimation accuracy, and they are not robust to noise naturally brought by the
limited capability of 2D joint detectors. In this paper, we propose
PoseFormerV2, which exploits a compact representation of lengthy skeleton
sequences in the frequency domain to efficiently scale up the receptive field
and boost robustness to noisy 2D joint detection. With minimum modifications to
PoseFormer, the proposed method effectively fuses features both in the time
domain and frequency domain, enjoying a better speed-accuracy trade-off than
its precursor. Extensive experiments on two benchmark datasets (i.e., Human3.6M
and MPI-INF-3DHP) demonstrate that the proposed approach significantly
outperforms the original PoseFormer and other transformer-based variants. Code
is released at \url{https://github.com/QitaoZhao/PoseFormerV2}.
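  A minimal sketch of the general idea of a compact frequency-domain representation of a long joint sequence -- keeping only the lowest DCT coefficients along time; the truncation length and the use of SciPy's DCT are assumptions, not the paper's exact transform.
```python
import numpy as np
from scipy.fft import dct, idct   # assumes SciPy is available

def compress_sequence(seq, n_coeffs):
    """Keep only the n_coeffs lowest-frequency DCT coefficients along the time axis.

    seq: (T, J, 2) array of T frames, J joints, (x, y) coordinates.
    """
    return dct(seq, axis=0, norm='ortho')[:n_coeffs]      # (n_coeffs, J, 2)

def reconstruct_sequence(coeffs, T):
    """Approximate the original sequence from the truncated coefficients."""
    full = np.zeros((T,) + coeffs.shape[1:])
    full[:coeffs.shape[0]] = coeffs
    return idct(full, axis=0, norm='ortho')

T, J = 81, 17
t = np.linspace(0, 1, T)[:, None, None]
seq = np.sin(2 * np.pi * t * np.arange(1, J + 1)[None, :, None] / J)
seq = seq + 0.01 * np.random.randn(T, J, 2)
compact = compress_sequence(seq, n_coeffs=9)     # 81 frames -> 9 coefficients per joint/axis
approx = reconstruct_sequence(compact, T)
print(compact.shape, float(np.abs(approx - seq).mean()))   # (9, 17, 2), small error
```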
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 15:45:51 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zhao",
"Qitao",
""
],
[
"Zheng",
"Ce",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Wang",
"Pichao",
""
],
[
"Chen",
"Chen",
""
]
] |
new_dataset
| 0.96392 |
2303.17540
|
Ruozhou Yu
|
Huayue Gu, Ruozhou Yu, Zhouyu Li, Xiaojian Wang, Fangtong Zhou
|
ESDI: Entanglement Scheduling and Distribution in the Quantum Internet
| null | null | null | null |
cs.NI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum entanglement distribution between remote nodes is key to many
promising quantum applications. Existing mechanisms have mainly focused on
improving throughput and fidelity via entanglement routing or single-node
scheduling. This paper considers entanglement scheduling and distribution among
many source-destination pairs with different requests over an entire quantum
network topology. Two practical scenarios are considered. When requests do not
have deadlines, we seek to minimize the average completion time of the
communication requests. If deadlines are specified, we seek to maximize the
number of requests whose deadlines are met. Inspired by optimal scheduling
disciplines in conventional single-queue scenarios, we design a general
optimization framework for entanglement scheduling and distribution called
ESDI, and develop a probabilistic protocol to implement the optimized solutions
in a general buffered quantum network. We develop a discrete-time quantum
network simulator for evaluation. Results show the superior performance of ESDI
compared to existing solutions.
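  For intuition only: the two classical single-queue disciplines such scheduling work draws on -- shortest-request-first to minimize average completion time and earliest-deadline-first to meet deadlines -- applied to abstract requests served one at a time; the request model is a deliberate simplification of entanglement distribution.
```python
def avg_completion_time(sizes, order):
    """Average completion time when requests are served one at a time in `order`."""
    t, total = 0, 0
    for i in order:
        t += sizes[i]
        total += t
    return total / len(sizes)

def deadlines_met(sizes, deadlines, order):
    t, met = 0, 0
    for i in order:
        t += sizes[i]
        met += int(t <= deadlines[i])
    return met

sizes = [5, 1, 3, 2]             # e.g. entangled pairs each request still needs
deadlines = [6, 2, 11, 4]

spt = sorted(range(len(sizes)), key=lambda i: sizes[i])       # shortest request first
fifo = list(range(len(sizes)))
print(avg_completion_time(sizes, spt), avg_completion_time(sizes, fifo))   # 5.25 vs 7.75

edf = sorted(range(len(sizes)), key=lambda i: deadlines[i])   # earliest deadline first
print(deadlines_met(sizes, deadlines, edf), deadlines_met(sizes, deadlines, fifo))  # 3 vs 2
```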
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:09:59 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Gu",
"Huayue",
""
],
[
"Yu",
"Ruozhou",
""
],
[
"Li",
"Zhouyu",
""
],
[
"Wang",
"Xiaojian",
""
],
[
"Zhou",
"Fangtong",
""
]
] |
new_dataset
| 0.999698 |
2303.17561
|
Yuting Gao
|
Yuting Gao, Jinfeng Liu, Zihan Xu, Tong Wu, Wei Liu, Jie Yang, Ke Li,
Xing Sun
|
SoftCLIP: Softer Cross-modal Alignment Makes CLIP Stronger
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
During the preceding biennium, vision-language pre-training has achieved
noteworthy success on several downstream tasks. Nevertheless, acquiring
high-quality image-text pairs, where the pairs are entirely exclusive of each
other, remains a challenging task, and noise exists in the commonly used
datasets. To address this issue, we propose SoftCLIP, a novel approach that
relaxes the strict one-to-one constraint and achieves a soft cross-modal
alignment by introducing a softened target, which is generated from the
fine-grained intra-modal self-similarity. The intra-modal guidance is
indicative, enabling two pairs to have some local similarities and modeling
many-to-many relationships between the two modalities. Besides, since the
positive still dominates in the softened target distribution, we disentangle
the negatives in the distribution to further boost the relation alignment with
the negatives in the cross-modal learning. Extensive experiments demonstrate
the effectiveness of SoftCLIP. In particular, on ImageNet zero-shot
classification task, using CC3M/CC12M as pre-training dataset, SoftCLIP brings
a top-1 accuracy improvement of 6.8%/7.2% over the CLIP baseline.
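  A hedged NumPy sketch of the general recipe described: build softened alignment targets from intra-modal self-similarity and compare the cross-modal similarity distribution against them with a cross-entropy-style loss. The temperature, mixing weight, and use of image-side similarity only are assumptions, not the paper's exact formulation.
```python
import numpy as np

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def soft_alignment_loss(img, txt, tau=0.07, alpha=0.2):
    """Cross-entropy between cross-modal similarities and softened targets.

    img, txt: L2-normalised features of shape (B, D) for a batch of paired samples.
    The target mixes the one-hot identity with the image intra-modal self-similarity.
    """
    logits = img @ txt.T / tau                                   # cross-modal similarities
    intra = np.exp(log_softmax(img @ img.T / tau, axis=1))       # fine-grained self-similarity
    target = (1 - alpha) * np.eye(len(img)) + alpha * intra      # softened, rows sum to 1
    return float(-(target * log_softmax(logits, axis=1)).sum(axis=1).mean())

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 32)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(8, 32)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
print(soft_alignment_loss(img, txt))
```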
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:27:22 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Gao",
"Yuting",
""
],
[
"Liu",
"Jinfeng",
""
],
[
"Xu",
"Zihan",
""
],
[
"Wu",
"Tong",
""
],
[
"Liu",
"Wei",
""
],
[
"Yang",
"Jie",
""
],
[
"Li",
"Ke",
""
],
[
"Sun",
"Xing",
""
]
] |
new_dataset
| 0.999843 |
2303.17563
|
Andrew Adamatzky
|
Panagiotis Mougkogiannis and Andrew Adamatzky
|
Light induced spiking of proteinoids
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proteinoids, or thermal proteins, are produced by heating amino acids to
their melting point and initiation of polymerisation to produce polymeric
chains. In aqueous solutions proteinoids swell into hollow microspheres. These
microspheres produce endogenous bursts of electrical potential spikes and change
patterns of their electrical activity in response to illumination. We report
results of detailed investigation on the effects of white cold light on the
spiking of proteinoids. We study how different types and intensities of light
determine proteinoids' spiking amplitude, period, and pattern. The results of
this study will be utilised to evaluate proteinoids for their potential as
optical sensors and their application in unconventional computing.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:29:37 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Mougkogiannis",
"Panagiotis",
""
],
[
"Adamatzky",
"Andrew",
""
]
] |
new_dataset
| 0.983567 |
2303.17568
|
Qinkai Zheng
|
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue,
Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, Jie Tang
|
CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual
Evaluations on HumanEval-X
| null | null | null | null |
cs.LG cs.AI cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Large pre-trained code generation models, such as OpenAI Codex, can generate
syntax- and function-correct code, making the coding of programmers more
productive and our pursuit of artificial general intelligence closer. In this
paper, we introduce CodeGeeX, a multilingual model with 13 billion parameters
for code generation. CodeGeeX is pre-trained on 850 billion tokens of 23
programming languages as of June 2022. Our extensive experiments suggest that
CodeGeeX outperforms multilingual code models of similar scale for both the
tasks of code generation and translation on HumanEval-X. Building upon
HumanEval (Python only), we develop the HumanEval-X benchmark for evaluating
multilingual models by hand-writing the solutions in C++, Java, JavaScript, and
Go. In addition, we build CodeGeeX-based extensions on Visual Studio Code,
JetBrains, and Cloud Studio, generating 4.7 billion tokens for tens of
thousands of active users per week. Our user study demonstrates that CodeGeeX
can help to increase coding efficiency for 83.4% of its users. Finally,
CodeGeeX is publicly accessible and in Sep. 2022, we open-sourced its code,
model weights (the version of 850B tokens), API, extensions, and HumanEval-X at
https://github.com/THUDM/CodeGeeX.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:34:01 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zheng",
"Qinkai",
""
],
[
"Xia",
"Xiao",
""
],
[
"Zou",
"Xu",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Wang",
"Shan",
""
],
[
"Xue",
"Yufei",
""
],
[
"Wang",
"Zihan",
""
],
[
"Shen",
"Lei",
""
],
[
"Wang",
"Andi",
""
],
[
"Li",
"Yang",
""
],
[
"Su",
"Teng",
""
],
[
"Yang",
"Zhilin",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.966234 |
2303.17583
|
Sachin Shah
|
Sachin Shah, Sakshum Kulshrestha, Christopher A. Metzler
|
TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic
Point-Spread-Functions
|
13 pages, 16 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Point-spread-function (PSF) engineering is a powerful computational imaging
technique wherein a custom phase mask is integrated into an optical system to
encode additional information into captured images. Used in combination with
deep learning, such systems now offer state-of-the-art performance at monocular
depth estimation, extended depth-of-field imaging, lensless imaging, and other
tasks. Inspired by recent advances in spatial light modulator (SLM) technology,
this paper answers a natural question: Can one encode additional information
and achieve superior performance by changing a phase mask dynamically over
time? We first prove that the set of PSFs described by static phase masks is
non-convex and that, as a result, time-averaged PSFs generated by dynamic phase
masks are fundamentally more expressive. We then demonstrate, in simulation,
that time-averaged dynamic (TiDy) phase masks can offer substantially improved
monocular depth estimation and extended depth-of-field imaging performance.
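  For intuition only: under a simple Fourier-optics model, the time-averaged PSF of a dynamic mask sequence is the average of the per-mask intensity PSFs, which need not equal the PSF of any single static mask. The pupil model and mask choice below are illustrative assumptions.
```python
import numpy as np

def psf_from_phase_mask(phase, pupil):
    """Incoherent intensity PSF of one phase mask under a simple Fourier-optics model."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def time_averaged_psf(phases, pupil):
    """Average of per-mask PSFs, as a dynamic SLM cycling through `phases` produces."""
    return np.mean([psf_from_phase_mask(p, pupil) for p in phases], axis=0)

n = 64
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (xx**2 + yy**2 <= 1).astype(float)
# Two opposite defocus-like masks displayed alternately during the exposure.
phases = [5.0 * (xx**2 + yy**2), -5.0 * (xx**2 + yy**2)]
avg = time_averaged_psf(phases, pupil)
print(avg.shape, round(float(avg.sum()), 6))   # (64, 64), 1.0
```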
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:51:07 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Shah",
"Sachin",
""
],
[
"Kulshrestha",
"Sakshum",
""
],
[
"Metzler",
"Christopher A.",
""
]
] |
new_dataset
| 0.996505 |
2303.17594
|
Tianheng Cheng
|
Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai
Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang
Wang
|
MobileInst: Video Instance Segmentation on the Mobile
|
Preprint. 12 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Although recent approaches aiming for video instance segmentation have
achieved promising results, it is still difficult to employ those approaches
for real-world applications on mobile devices, which mainly suffer from (1)
heavy computation and memory cost and (2) complicated heuristics for tracking
objects. To address those issues, we present MobileInst, a lightweight and
mobile-friendly framework for video instance segmentation on mobile devices.
Firstly, MobileInst adopts a mobile vision transformer to extract multi-level
semantic features and presents an efficient query-based dual-transformer
instance decoder for mask kernels and a semantic-enhanced mask decoder to
generate instance segmentation per frame. Secondly, MobileInst exploits simple
yet effective kernel reuse and kernel association to track objects for video
instance segmentation. Further, we propose temporal query passing to enhance
the tracking ability for kernels. We conduct experiments on COCO and
YouTube-VIS datasets to demonstrate the superiority of MobileInst and evaluate
the inference latency on a mobile CPU core of Qualcomm Snapdragon-778G, without
other methods of acceleration. On the COCO dataset, MobileInst achieves 30.5
mask AP and 176 ms on the mobile CPU, which reduces the latency by 50% compared
to the previous SOTA. For video instance segmentation, MobileInst achieves 35.0
AP on YouTube-VIS 2019 and 30.1 AP on YouTube-VIS 2021. Code will be available
to facilitate real-world applications and future research.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:59:02 GMT"
}
] | 2023-03-31T00:00:00 |
[
[
"Zhang",
"Renhong",
""
],
[
"Cheng",
"Tianheng",
""
],
[
"Yang",
"Shusheng",
""
],
[
"Jiang",
"Haoyi",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Lyu",
"Jiancheng",
""
],
[
"Li",
"Xin",
""
],
[
"Ying",
"Xiaowen",
""
],
[
"Gao",
"Dashan",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Wang",
"Xinggang",
""
]
] |
new_dataset
| 0.999445 |
1602.04877
|
Peng Wei
|
Peng Wei, Xiang-Gen Xia, Yue Xiao, Shaoqian Li
|
Fast DGT Based Receivers for GFDM in Broadband Channels
|
28 pages, 8 figures
| null |
10.1109/TCOMM.2016.2598568
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generalized frequency division multiplexing (GFDM) is a recent multicarrier
5G waveform candidate with flexibility of pulse shaping filters. However, the
flexibility of choosing a pulse shaping filter may result in inter carrier
interference (ICI) and inter symbol interference (ISI), which becomes more
severe in a broadband channel. In order to eliminate the ISI and ICI, based on
discrete Gabor transform (DGT), in this paper, a transmit GFDM signal is first
treated as an inverse DGT (IDGT), and then a frequency-domain DGT is formulated
to recover (as a receiver) the GFDM signal. Furthermore, to reduce the
complexity, a suboptimal frequency-domain DGT called local DGT (LDGT) is
developed. Some analyses are also given for the proposed DGT based receivers.
|
[
{
"version": "v1",
"created": "Tue, 16 Feb 2016 01:00:04 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Wei",
"Peng",
""
],
[
"Xia",
"Xiang-Gen",
""
],
[
"Xiao",
"Yue",
""
],
[
"Li",
"Shaoqian",
""
]
] |
new_dataset
| 0.986553 |
2202.03791
|
Uli Fahrenberg
|
Uli Fahrenberg, Christian Johansen, Georg Struth, Krzysztof
Ziemia\'nski
|
Kleene Theorem for Higher-Dimensional Automata
| null | null | null | null |
cs.FL math.AT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove a Kleene theorem for higher-dimensional automata. It states that the
languages they recognise are precisely the rational subsumption-closed sets of
finite interval pomsets. The rational operations on these languages include a
gluing composition, for which we equip pomsets with interfaces. For our proof,
we introduce higher-dimensional automata with interfaces, which are modelled as
presheaves over labelled precube categories, and develop tools and techniques
inspired by algebraic topology, such as cylinders and (co)fibrations.
Higher-dimensional automata form a general model of non-interleaving
concurrency, which subsumes many other approaches. Interval orders are used as
models for concurrent and distributed systems where events extend in time. Our
tools and techniques may therefore yield templates for Kleene theorems in
various models and applications.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 11:29:30 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 12:22:19 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 18:17:02 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Fahrenberg",
"Uli",
""
],
[
"Johansen",
"Christian",
""
],
[
"Struth",
"Georg",
""
],
[
"Ziemiański",
"Krzysztof",
""
]
] |
new_dataset
| 0.996609 |
2204.04274
|
Fabio Zanasi
|
Aleksandar Milosavljevic and Robin Piedeleu and Fabio Zanasi
|
String Diagram Rewriting Modulo Commutative (Co)monoid Structure
| null | null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
String diagrams constitute an intuitive and expressive graphical syntax that
has found application in a very diverse range of fields including concurrency
theory, quantum computing, control theory, machine learning, linguistics, and
digital circuits. Rewriting theory for string diagrams relies on a
combinatorial interpretation as double-pushout rewriting of certain
hypergraphs. As previously studied, there is a `tension' in this
interpretation: in order to make it sound and complete, we either need to add
structure on string diagrams (in particular, Frobenius algebra structure) or
pose restrictions on double-pushout rewriting (resulting in `convex'
rewriting). From the string diagram viewpoint, imposing a full Frobenius
structure may not always be natural or desirable in applications, which
motivates our study of a weaker requirement: commutative monoid structure. In
this work we characterise string diagram rewriting modulo commutative monoid
equations, via a sound and complete interpretation in a suitable notion of
double-pushout rewriting of hypergraphs.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 20:04:21 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2023 09:20:29 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Milosavljevic",
"Aleksandar",
""
],
[
"Piedeleu",
"Robin",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.996516 |
2206.06424
|
Mo Alloulah
|
Mohammed Alloulah, Maximilian Arnold
|
Look, Radiate, and Learn: Self-Supervised Localisation via Radio-Visual
Correspondence
|
To appear in IEEE/CVF CVPR '23
| null | null | null |
cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Next generation cellular networks will implement radio sensing functions
alongside customary communications, thereby enabling unprecedented worldwide
sensing coverage outdoors. Deep learning has revolutionised computer vision but
has had limited application to radio perception tasks, in part due to lack of
systematic datasets and benchmarks dedicated to the study of the performance
and promise of radio sensing. To address this gap, we present MaxRay: a
synthetic radio-visual dataset and benchmark that facilitate precise target
localisation in radio. We further propose to learn to localise targets in radio
without supervision by extracting self-coordinates from radio-visual
correspondence. We use such self-supervised coordinates to train a radio
localiser network. We characterise our performance against a number of
state-of-the-art baselines. Our results indicate that accurate radio target
localisation can be automatically learned from paired radio-visual data without
labels, which is important for empirical data. This opens the door for vast
data scalability and may prove key to realising the promise of robust radio
sensing atop a unified communication-perception cellular infrastructure.
Dataset will be hosted on IEEE DataPort.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 19:08:36 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 21:54:35 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Nov 2022 21:34:39 GMT"
},
{
"version": "v4",
"created": "Wed, 29 Mar 2023 10:11:26 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Alloulah",
"Mohammed",
""
],
[
"Arnold",
"Maximilian",
""
]
] |
new_dataset
| 0.999376 |
2206.11736
|
Patrick Feeney
|
Patrick Feeney, Sarah Schneider, Panagiotis Lymperopoulos, Li-Ping
Liu, Matthias Scheutz, Michael C. Hughes
|
NovelCraft: A Dataset for Novelty Detection and Discovery in Open Worlds
|
Published in Transactions on Machine Learning Research (03/2023)
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In order for artificial agents to successfully perform tasks in changing
environments, they must be able to both detect and adapt to novelty. However,
visual novelty detection research often only evaluates on repurposed datasets
such as CIFAR-10 originally intended for object classification, where images
focus on one distinct, well-centered object. New benchmarks are needed to
represent the challenges of navigating the complex scenes of an open world. Our
new NovelCraft dataset contains multimodal episodic data of the images and
symbolic world-states seen by an agent completing a pogo stick assembly task
within a modified Minecraft environment. In some episodes, we insert novel
objects of varying size within the complex 3D scene that may impact gameplay.
Our visual novelty detection benchmark finds that methods that rank best on
popular area-under-the-curve metrics may be outperformed by simpler
alternatives when controlling false positives matters most. Further multimodal
novelty detection experiments suggest that methods that fuse both visual and
symbolic information can improve time until detection as well as overall
discrimination. Finally, our evaluation of recent generalized category
discovery methods suggests that adapting to new imbalanced categories in
complex scenes remains an exciting open problem.
|
[
{
"version": "v1",
"created": "Thu, 23 Jun 2022 14:31:33 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 14:53:51 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 18:27:24 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Feeney",
"Patrick",
""
],
[
"Schneider",
"Sarah",
""
],
[
"Lymperopoulos",
"Panagiotis",
""
],
[
"Liu",
"Li-Ping",
""
],
[
"Scheutz",
"Matthias",
""
],
[
"Hughes",
"Michael C.",
""
]
] |
new_dataset
| 0.999825 |
2210.05336
|
Alexander Gheorghiu
|
Alexander V. Gheorghiu and David J. Pym
|
Definite Formulae, Negation-as-Failure, and the Base-extension Semantics
of Intuitionistic Propositional Logic
|
submitted
| null | null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Proof-theoretic semantics (P-tS) is the paradigm of semantics in which
meaning in logic is based on proof (as opposed to truth). A particular instance
of P-tS for intuitionistic propositional logic (IPL) is its base-extension
semantics (B-eS). This semantics is given by a relation called support,
explaining the meaning of the logical constants, which is parameterized by
systems of rules called bases that provide the semantics of atomic
propositions. In this paper, we interpret bases as collections of definite
formulae and use the operational view of the latter as provided by uniform
proof-search -- the proof-theoretic foundation of logic programming (LP) -- to
establish the completeness of IPL for the B-eS. This perspective allows
negation, a subtle issue in P-tS, to be understood in terms of the
negation-as-failure protocol in LP. Specifically, while the denial of a
proposition is traditionally understood as the assertion of its negation, in
B-eS we may understand the denial of a proposition as the failure to find a
proof of it. In this way, assertion and denial are both prime concepts in P-tS.
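  A toy illustration (not part of the paper) of the negation-as-failure reading: an atom is denied exactly when a proof search for it over a base of definite clauses fails; the clause representation and depth bound are simplifications.
```python
def provable(atom, clauses, depth=16):
    """Atom is supported iff some clause with that head has all body atoms supported."""
    if depth == 0:
        return False
    return any(
        head == atom and all(provable(b, clauses, depth - 1) for b in body)
        for head, body in clauses
    )

def naf(atom, clauses):
    """Negation as failure: deny `atom` exactly when the proof search for it fails."""
    return not provable(atom, clauses)

# A tiny base of definite clauses, written as (head, [body atoms]).
base = [
    ('rain', []),                       # a fact
    ('wet', ['rain']),
    ('slippery', ['wet', 'ice']),       # no clause ever derives 'ice'
]
print(provable('wet', base))    # True:  'wet' is supported by the base
print(naf('slippery', base))    # True:  the search for 'ice' fails, so 'slippery' is denied
```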
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 10:59:15 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 18:37:47 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Gheorghiu",
"Alexander V.",
""
],
[
"Pym",
"David J.",
""
]
] |
new_dataset
| 0.987089 |
2211.08540
|
Rishabh Jain
|
Rishabh Jain, Krishna Kumar Singh, Mayur Hemani, Jingwan Lu, Mausoom
Sarkar, Duygu Ceylan, Balaji Krishnamurthy
|
VGFlow: Visibility guided Flow Network for Human Reposing
|
Selected for publication in CVPR2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The task of human reposing involves generating a realistic image of a person
standing in an arbitrary conceivable pose. There are multiple difficulties in
generating perceptually accurate images, and existing methods suffer from
limitations in preserving texture, maintaining pattern coherence, respecting
cloth boundaries, handling occlusions, manipulating skin generation, etc. These
difficulties are further exacerbated by the fact that the possible space of
pose orientation for humans is large and variable, the nature of clothing items
is highly non-rigid, and the diversity in body shape differs largely among the
population. To alleviate these difficulties and synthesize perceptually
accurate images, we propose VGFlow. Our model uses a visibility-guided flow
module to disentangle the flow into visible and invisible parts of the target
for simultaneous texture preservation and style manipulation. Furthermore, to
tackle distinct body shapes and avoid network artifacts, we also incorporate a
self-supervised patch-wise "realness" loss to improve the output. VGFlow
achieves state-of-the-art results as observed qualitatively and quantitatively
on different image quality metrics (SSIM, LPIPS, FID).
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 12:41:07 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2022 12:11:35 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Mar 2023 09:40:41 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Mar 2023 10:57:05 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Jain",
"Rishabh",
""
],
[
"Singh",
"Krishna Kumar",
""
],
[
"Hemani",
"Mayur",
""
],
[
"Lu",
"Jingwan",
""
],
[
"Sarkar",
"Mausoom",
""
],
[
"Ceylan",
"Duygu",
""
],
[
"Krishnamurthy",
"Balaji",
""
]
] |
new_dataset
| 0.98328 |
2212.04692
|
Toshihiro Ota
|
Toshihiro Ota, Ryo Karakida
|
Attention in a family of Boltzmann machines emerging from modern
Hopfield networks
|
15 pages, 3 figures. v2: added figures and various
corrections/improvements especially in Introduction and Section 3. Published
version
| null | null |
RIKEN-iTHEMS-Report-22
|
cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hopfield networks and Boltzmann machines (BMs) are fundamental energy-based
neural network models. Recent studies on modern Hopfield networks have broaden
the class of energy functions and led to a unified perspective on general
Hopfield networks including an attention module. In this letter, we consider
the BM counterparts of modern Hopfield networks using the associated energy
functions, and study their salient properties from a trainability perspective.
In particular, the energy function corresponding to the attention module
naturally introduces a novel BM, which we refer to as the attentional BM
(AttnBM). We verify that AttnBM has a tractable likelihood function and
gradient for certain special cases and is easy to train. Moreover, we reveal
the hidden connections between AttnBM and some single-layer models, namely the
Gaussian--Bernoulli restricted BM and the denoising autoencoder with softmax
units coming from denoising score matching. We also investigate BMs introduced
by other energy functions and show that the energy function of dense
associative memory models gives BMs belonging to Exponential Family Harmoniums.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 06:52:36 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2023 02:36:58 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Ota",
"Toshihiro",
""
],
[
"Karakida",
"Ryo",
""
]
] |
new_dataset
| 0.953952 |
2212.12324
|
Ilya Chugunov
|
Ilya Chugunov, Yuxuan Zhang, Felix Heide
|
Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized
Photography
|
Project page: https://light.princeton.edu/publication/soap
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern mobile burst photography pipelines capture and merge a short sequence
of frames to recover an enhanced image, but often disregard the 3D nature of
the scene they capture, treating pixel motion between images as a 2D
aggregation problem. We show that in a ''long-burst'', forty-two 12-megapixel
RAW frames captured in a two-second sequence, there is enough parallax
information from natural hand tremor alone to recover high-quality scene depth.
To this end, we devise a test-time optimization approach that fits a neural
RGB-D representation to long-burst data and simultaneously estimates scene
depth and camera motion. Our plane plus depth model is trained end-to-end, and
performs coarse-to-fine refinement by controlling which multi-resolution volume
features the network has access to at what time during training. We validate
the method experimentally, and demonstrate geometrically accurate depth
reconstructions with no additional hardware or separate data pre-processing and
pose-estimation steps.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 18:54:34 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 18:54:46 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Chugunov",
"Ilya",
""
],
[
"Zhang",
"Yuxuan",
""
],
[
"Heide",
"Felix",
""
]
] |
new_dataset
| 0.999222 |
2303.13351
|
Debayan Banerjee
|
Debayan Banerjee, Sushil Awale, Ricardo Usbeck, Chris Biemann
|
DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly
Knowledge Graph
|
12 pages ceur-ws 1 column accepted at International Bibliometric
  Information Retrieval Workshop @ ECIR 2023
| null | null | null |
cs.DL cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we create a question answering dataset over the DBLP scholarly
knowledge graph (KG). DBLP is an on-line reference for bibliographic
information on major computer science publications that indexes over 4.4
million publications published by more than 2.2 million authors. Our dataset
consists of 10,000 question-answer pairs with the corresponding SPARQL queries
which can be executed over the DBLP KG to fetch the correct answer. DBLP-QuAD
is the largest scholarly question answering dataset.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 15:29:21 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 09:47:57 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2023 13:37:52 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Banerjee",
"Debayan",
""
],
[
"Awale",
"Sushil",
""
],
[
"Usbeck",
"Ricardo",
""
],
[
"Biemann",
"Chris",
""
]
] |
new_dataset
| 0.999201 |
2303.16254
|
Xinhang Liu
|
Xinhang Liu, Yan Zeng, Yifan Qin, Hao Li, Jiakai Zhang, Lan Xu, Jingyi
Yu
|
CryoFormer: Continuous Reconstruction of 3D Structures from Cryo-EM Data
using Transformer-based Neural Representations
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
High-resolution heterogeneous reconstruction of 3D structures of proteins and
other biomolecules using cryo-electron microscopy (cryo-EM) is essential for
understanding fundamental processes of life. However, it is still challenging
to reconstruct the continuous motions of 3D structures from hundreds of
thousands of noisy and randomly oriented 2D cryo-EM images. Existing methods
based on coordinate-based neural networks show compelling results to model
continuous conformations of 3D structures in the Fourier domain, but they
suffer from a limited ability to model local flexible regions and lack
interpretability. We propose a novel approach, cryoFormer, that utilizes a
transformer-based network architecture for continuous heterogeneous cryo-EM
reconstruction. For the first time, we directly reconstruct continuous
conformations of 3D structures using an implicit feature volume in the 3D
spatial domain. A novel deformation transformer decoder further improves
reconstruction quality and, more importantly, locates and robustly tackles
flexible 3D regions caused by conformations. In experiments, our method
outperforms current approaches on three public datasets (1 synthetic and 2
experimental) and a new synthetic dataset of PEDV spike protein. The code and
new synthetic dataset will be released for better reproducibility of our
results. Project page: https://cryoformer.github.io.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 18:59:17 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Liu",
"Xinhang",
""
],
[
"Zeng",
"Yan",
""
],
[
"Qin",
"Yifan",
""
],
[
"Li",
"Hao",
""
],
[
"Zhang",
"Jiakai",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
]
] |
new_dataset
| 0.999275 |
2303.16274
|
Sokratis Anagnostopoulos
|
Sokratis Anagnostopoulos, Jens Bauer, Mariana C. A. Clare, Matthew D.
Piggott
|
Accelerated wind farm yaw and layout optimisation with multi-fidelity
deep transfer learning wake models
|
16 Pages, 18 Figures, 3 Tables
| null | null | null |
cs.LG cs.CE physics.ao-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Wind farm modelling has been an area of rapidly increasing interest with
numerous analytical as well as computational-based approaches developed to
extend the margins of wind farm efficiency and maximise power production. In
this work, we present the novel ML framework WakeNet, which can reproduce
generalised 2D turbine wake velocity fields at hub-height over a wide range of
yaw angles, wind speeds and turbulence intensities (TIs), with a mean accuracy
of 99.8% compared to the solution calculated using the state-of-the-art wind
farm modelling software FLORIS. As the generation of sufficient high-fidelity
data for network training purposes can be cost-prohibitive, the utility of
multi-fidelity transfer learning has also been investigated. Specifically, a
network pre-trained on the low-fidelity Gaussian wake model is fine-tuned in
order to obtain accurate wake results for the mid-fidelity Curl wake model. The
robustness and overall performance of WakeNet on various wake steering control
and layout optimisation scenarios has been validated through power-gain
heatmaps, obtaining at least 90% of the power gained through optimisation
performed with FLORIS directly. We also demonstrate that when utilising the
Curl model, WakeNet is able to provide similar power gains to FLORIS, two
orders of magnitude faster (e.g. 10 minutes vs 36 hours per optimisation case).
The wake evaluation time of WakeNet when trained on a high-fidelity CFD dataset
is expected to be similar, thus further increasing computational time gains.
These promising results show that generalised wake modelling with ML tools can
be accurate enough to contribute towards active yaw and layout optimisation,
while producing realistic optimised configurations at a fraction of the
computational cost, hence making it feasible to perform real-time active yaw
control as well as robust optimisation under uncertainty.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 19:36:40 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Anagnostopoulos",
"Sokratis",
""
],
[
"Bauer",
"Jens",
""
],
[
"Clare",
"Mariana C. A.",
""
],
[
"Piggott",
"Matthew D.",
""
]
] |
new_dataset
| 0.99581 |
2303.16303
|
Zhengcheng Huang
|
Timothy M. Chan, Zhengcheng Huang
|
Constant-Hop Spanners for More Geometric Intersection Graphs, with Even
Smaller Size
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In SoCG 2022, Conroy and T\'oth presented several constructions of sparse,
low-hop spanners in geometric intersection graphs, including an $O(n\log
n)$-size 3-hop spanner for $n$ disks (or fat convex objects) in the plane, and
an $O(n\log^2 n)$-size 3-hop spanner for $n$ axis-aligned rectangles in the
plane. Their work left open two major questions: (i) can the size be made
closer to linear by allowing larger constant stretch? and (ii) can near-linear
size be achieved for more general classes of intersection graphs?
We address both questions simultaneously, by presenting new constructions of
constant-hop spanners that have almost linear size and that hold for a much
larger class of intersection graphs. More precisely, we prove the existence of
an $O(1)$-hop spanner for arbitrary string graphs with $O(n\alpha_k(n))$ size
for any constant $k$, where $\alpha_k(n)$ denotes the $k$-th function in the
inverse Ackermann hierarchy. We similarly prove the existence of an $O(1)$-hop
spanner for intersection graphs of $d$-dimensional fat objects with
$O(n\alpha_k(n))$ size for any constant $k$ and $d$.
We also improve on some of Conroy and T\'oth's specific previous results, in
either the number of hops or the size: we describe an $O(n\log n)$-size 2-hop
spanner for disks (or more generally objects with linear union complexity) in
the plane, and an $O(n\log n)$-size 3-hop spanner for axis-aligned rectangles
in the plane.
Our proofs are all simple, using separator theorems, recursion, shifted
quadtrees, and shallow cuttings.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 20:53:46 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Chan",
"Timothy M.",
""
],
[
"Huang",
"Zhengcheng",
""
]
] |
new_dataset
| 0.996876 |
2303.16344
|
Zixuan Feng
|
Zixuan Feng, Mariam Guizani, Marco A. Gerosa, Anita Sarma
|
The State of Diversity and Inclusion in Apache: A Pulse Check
|
11 pages, 1 figure
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diversity and inclusion (D&I) in open source software (OSS) is a multifaceted
concept that arises from differences in contributors' gender, seniority,
language, region, and other characteristics. D&I has received growing attention
in OSS ecosystems and projects, and various programs have been implemented to
foster contributor diversity. However, we do not yet know how the state of D&I
is evolving. By understanding the state of D&I in OSS projects, the community
can develop new and adjust current strategies to foster diversity among
contributors and gain insights into the mechanisms and processes that
facilitate the development of inclusive communities. In this paper, we report
and compare the results of two surveys of Apache Software Foundation (ASF)
contributors conducted over two years (n=624 & n=432), considering a variety of
D&I aspects. We see improvements in engagement among those traditionally
underrepresented in OSS, particularly those who are in a gender minority or not
confident in English. Yet, the gender gap in the number of contributors
remains. We expect this study to help communities tailor their efforts in
promoting D&I in OSS.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 22:49:50 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Feng",
"Zixuan",
""
],
[
"Guizani",
"Mariam",
""
],
[
"Gerosa",
"Marco A.",
""
],
[
"Sarma",
"Anita",
""
]
] |
new_dataset
| 0.978391 |
2303.16351
|
Yatish Kumar
|
Stacey Sheldon, Yatish Kumar, Michael Goodrich, Graham Heyes
|
EJ-FAT Joint ESnet JLab FPGA Accelerated Transport Load Balancer
|
Published at INDIS workshop at Supercomm 2022
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
To increase the science rate for high data rates/volumes, Thomas Jefferson
National Accelerator Facility (JLab) has partnered with Energy Sciences Network
(ESnet) to define an edge to data center traffic shaping / steering transport
capability featuring data event-aware network shaping and forwarding. The
keystone of this ESnet JLab FPGA Accelerated Transport (EJFAT) is the joint
development of a dynamic compute work Load Balancer (LB) of UDP streamed data.
The LB is a suite consisting of a Field Programmable Gate Array (FPGA)
executing the dynamically configurable, low fixed latency LB data plane
featuring real-time packet redirection at high throughput, and a control plane
running on the FPGA host computer that monitors network and compute farm
telemetry in order to make dynamic decisions for destination compute host
redirection / load balancing.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 23:15:16 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Sheldon",
"Stacey",
""
],
[
"Kumar",
"Yatish",
""
],
[
"Goodrich",
"Michael",
""
],
[
"Heyes",
"Graham",
""
]
] |
new_dataset
| 0.995707 |
2303.16382
|
Chaitanya Mitash
|
Chaitanya Mitash, Fan Wang, Shiyang Lu, Vikedo Terhuja, Tyler Garaas,
Felipe Polido, Manikantan Nambi
|
ARMBench: An Object-centric Benchmark Dataset for Robotic Manipulation
|
To appear at the IEEE Conference on Robotics and Automation (ICRA),
2023
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces Amazon Robotic Manipulation Benchmark (ARMBench), a
large-scale, object-centric benchmark dataset for robotic manipulation in the
context of a warehouse. Automation of operations in modern warehouses requires
a robotic manipulator to deal with a wide variety of objects, unstructured
storage, and dynamically changing inventory. Such settings pose challenges in
perceiving the identity, physical characteristics, and state of objects during
manipulation. Existing datasets for robotic manipulation consider a limited set
of objects or utilize 3D models to generate synthetic scenes with limitations in
capturing the variety of object properties, clutter, and interactions. We
present a large-scale dataset collected in an Amazon warehouse using a robotic
manipulator performing object singulation from containers with heterogeneous
contents. ARMBench contains images, videos, and metadata that correspond to
235K+ pick-and-place activities on 190K+ unique objects. The data is captured
at different stages of manipulation, i.e., pre-pick, during transfer, and after
placement. Benchmark tasks are proposed by virtue of high-quality annotations,
and baseline performance evaluations are presented on three visual perception
challenges, namely 1) object segmentation in clutter, 2) object identification,
and 3) defect detection. ARMBench can be accessed at http://armbench.com
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 01:42:54 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Mitash",
"Chaitanya",
""
],
[
"Wang",
"Fan",
""
],
[
"Lu",
"Shiyang",
""
],
[
"Terhuja",
"Vikedo",
""
],
[
"Garaas",
"Tyler",
""
],
[
"Polido",
"Felipe",
""
],
[
"Nambi",
"Manikantan",
""
]
] |
new_dataset
| 0.999713 |
2303.16450
|
Jinyoung Park
|
Jinyoung Park, Sanghyeok Lee, Sihyeon Kim, Yunyang Xiong, Hyunwoo J.
Kim
|
Self-positioning Point-based Transformer for Point Cloud Understanding
|
Accepted paper at CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have shown superior performance on various computer vision tasks
with their capabilities to capture long-range dependencies. Despite the
success, it is challenging to directly apply Transformers on point clouds due
to their quadratic cost in the number of points. In this paper, we present a
Self-Positioning point-based Transformer (SPoTr), which is designed to capture
both local and global shape contexts with reduced complexity. Specifically,
this architecture consists of local self-attention and self-positioning
point-based global cross-attention. The self-positioning points, adaptively
located based on the input shape, consider both spatial and semantic
information with disentangled attention to improve expressive power. With the
self-positioning points, we propose a novel global cross-attention mechanism
for point clouds, which improves the scalability of global self-attention by
allowing the attention module to compute attention weights with only a small
set of self-positioning points. Experiments show the effectiveness of SPoTr on
three point cloud tasks, namely shape classification, part segmentation, and
scene segmentation. In particular, our proposed model achieves an accuracy gain
of 2.6% over the previous best models on shape classification with
ScanObjectNN. We also provide qualitative analyses to demonstrate the
interpretability of self-positioning points. The code of SPoTr is available at
https://github.com/mlvlab/SPoTr.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 04:27:11 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Park",
"Jinyoung",
""
],
[
"Lee",
"Sanghyeok",
""
],
[
"Kim",
"Sihyeon",
""
],
[
"Xiong",
"Yunyang",
""
],
[
"Kim",
"Hyunwoo J.",
""
]
] |
new_dataset
| 0.988308 |
2303.16452
|
Youhan Lee
|
Youhan Lee, Hasun Yu
|
ProtFIM: Fill-in-Middle Protein Sequence Design via Protein Language
Models
|
Preprint
| null | null | null |
cs.LG cs.AI q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein language models (pLMs), pre-trained via causal language modeling on
protein sequences, have been a promising tool for protein sequence design. In
real-world protein engineering, there are many cases where the amino acids in
the middle of a protein sequence are optimized while maintaining other
residues. Unfortunately, because of the left-to-right nature of pLMs, existing
pLMs modify suffix residues by prompting prefix residues, which are
insufficient for the infilling task that considers the whole surrounding
context. To find the more effective pLMs for protein engineering, we design a
new benchmark, Secondary structureE InFilling rEcoveRy, SEIFER, which
approximates infilling sequence design scenarios. With the evaluation of
existing models on the benchmark, we reveal the weakness of existing language
models and show that language models trained via fill-in-middle transformation,
called ProtFIM, are more appropriate for protein engineering. Also, we prove
that ProtFIM generates protein sequences with decent protein representations
through exhaustive experiments and visualizations.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 04:35:50 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Lee",
"Youhan",
""
],
[
"Yu",
"Hasun",
""
]
] |
new_dataset
| 0.99985 |
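To illustrate the fill-in-middle (FIM) transformation mentioned in the ProtFIM abstract above (arXiv 2303.16452), here is a minimal Python sketch of the generic FIM recipe. The sentinel tokens, the random split, and the example protein string are illustrative assumptions, not the authors' exact preprocessing.

```python
import random

# Hypothetical sentinel tokens; ProtFIM's actual special tokens may differ.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def fim_transform(sequence: str, rng: random.Random) -> str:
    """Rearrange a sequence so a left-to-right LM learns to infill the middle."""
    i, j = sorted(rng.sample(range(1, len(sequence)), 2))
    prefix, middle, suffix = sequence[:i], sequence[i:j], sequence[j:]
    # Training target: generate `middle` after conditioning on prefix and suffix.
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"

rng = random.Random(0)
print(fim_transform("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", rng))
```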
2303.16485
|
Tao Hu
|
Tao Hu, Xiaogang Xu, Ruihang Chu, Jiaya Jia
|
TriVol: Point Cloud Rendering via Triple Volumes
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing learning-based methods for point cloud rendering adopt various 3D
representations and feature querying mechanisms to alleviate the sparsity
problem of point clouds. However, artifacts still appear in rendered images,
due to the challenges in extracting continuous and discriminative 3D features
from point clouds. In this paper, we present a dense yet lightweight 3D
representation, named TriVol, that can be combined with NeRF to render
photo-realistic images from point clouds. Our TriVol consists of triple slim
volumes, each of which is encoded from the point cloud. TriVol has two
advantages. First, it fuses respective fields at different scales and thus
extracts local and non-local features for discriminative representation.
Second, since the volume size is greatly reduced, our 3D decoder can be
efficiently inferred, allowing us to increase the resolution of the 3D space to
render more point details. Extensive experiments on different benchmarks with
varying kinds of scenes/objects demonstrate our framework's effectiveness
compared with current approaches. Moreover, our framework has excellent
generalization ability to render a category of scenes/objects without
fine-tuning.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 06:34:12 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Hu",
"Tao",
""
],
[
"Xu",
"Xiaogang",
""
],
[
"Chu",
"Ruihang",
""
],
[
"Jia",
"Jiaya",
""
]
] |
new_dataset
| 0.999128 |
2303.16528
|
Sebastian Neumaier
|
Lukas K\"onig and Sebastian Neumaier
|
Building a Knowledge Graph of Distributed Ledger Technologies
|
URI: https://w3id.org/DLTOntology
| null |
10.5281/zenodo.6497620
| null |
cs.CL cs.AI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed ledger systems have become more prominent and successful in
recent years, with a focus on blockchains and cryptocurrency. This has led to
various misunderstandings about both the technology itself and its
capabilities, as in many cases blockchain and cryptocurrency are used
synonymously and other applications are often overlooked. Therefore, as a
whole, the view of distributed ledger technology beyond blockchains and
cryptocurrencies is very limited. Existing vocabularies and ontologies often
focus on single aspects of the technology, or in some cases even just on one
product. This potentially leads to other types of distributed ledgers and their
possible use cases being neglected. In this paper, we present a knowledge graph
and an ontology for distributed ledger technologies, which includes security
considerations to model aspects such as threats and vulnerabilities,
application domains, as well as relevant standards and regulations. Such a
knowledge graph improves the overall understanding of distributed ledgers,
reveals their strengths, and supports the work of security personnel, i.e.
analysts and system architects. We discuss potential uses and follow semantic
web best practices to evaluate and publish the ontology and knowledge graph.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 08:34:01 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"König",
"Lukas",
""
],
[
"Neumaier",
"Sebastian",
""
]
] |
new_dataset
| 0.984678 |
2303.16531
|
Sergey Nesteruk
|
Igor Markov, Sergey Nesteruk, Andrey Kuznetsov, Denis Dimitrov
|
RusTitW: Russian Language Text Dataset for Visual Text in-the-Wild
Recognition
|
5 pages, 6 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Information surrounds people in modern life. Text is a very efficient type of
information that people have used for communication for centuries. However,
automated text-in-the-wild recognition remains a challenging problem. The major
limitation for a DL system is the lack of training data. For competitive
performance, the training set must contain many samples that replicate
real-world cases. While there are many high-quality datasets for English text
recognition, there are no available datasets for the Russian language. In this
paper, we present a large-scale human-labeled dataset for Russian text
recognition in-the-wild. We also publish a synthetic dataset and code to
reproduce the generation process.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 08:38:55 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Markov",
"Igor",
""
],
[
"Nesteruk",
"Sergey",
""
],
[
"Kuznetsov",
"Andrey",
""
],
[
"Dimitrov",
"Denis",
""
]
] |
new_dataset
| 0.999876 |
2303.16621
|
Mahmoud Salhab
|
Mahmoud Salhab and Haidar Harmanani
|
AraSpot: Arabic Spoken Command Spotting
|
A preprint
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Spoken keyword spotting (KWS) is the task of identifying a keyword in an
audio stream and is widely used in smart devices at the edge in order to
activate voice assistants and perform hands-free tasks. The task is daunting
because such systems must achieve high accuracy while continuing to run
efficiently on low-power devices with possibly limited computational
capabilities. This work presents AraSpot for Arabic keyword spotting, trained on
40 Arabic keywords using different online data augmentations and introducing
the ConformerGRU model architecture. Finally, we further improve the
performance of the model by training a text-to-speech model for synthetic data
generation. AraSpot achieved a state-of-the-art (SOTA) result of 99.59%,
outperforming previous approaches.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 12:22:17 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Salhab",
"Mahmoud",
""
],
[
"Harmanani",
"Haidar",
""
]
] |
new_dataset
| 0.995399 |
2303.16631
|
Bo Zhou
|
Haiyan Guo, Bo Zhou, Bizhu Lin
|
On the $\alpha$-spectral radius of hypergraphs
| null | null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
For real $\alpha\in [0,1)$ and a hypergraph $G$, the $\alpha$-spectral radius
of $G$ is the largest eigenvalue of the matrix $A_{\alpha}(G)=\alpha
D(G)+(1-\alpha)A(G)$, where $A(G)$ is the adjacency matrix of $G$, which is a
symmetric matrix with zero diagonal such that for distinct vertices $u,v$ of
$G$, the $(u,v)$-entry of $A(G)$ is exactly the number of edges containing both
$u$ and $v$, and $D(G)$ is the diagonal matrix of row sums of $A(G)$. We study
the $\alpha$-spectral radius of a hypergraph that is uniform or not necessarily
uniform. We propose some local grafting operations that increase or decrease
the $\alpha$-spectral radius of a hypergraph. We determine the unique
hypergraphs with maximum $\alpha$-spectral radius among $k$-uniform hypertrees,
among $k$-uniform unicyclic hypergraphs, and among $k$-uniform hypergraphs with
fixed number of pendant edges. We also determine the unique hypertrees with
maximum $\alpha$-spectral radius among hypertrees with given number of vertices
and edges, the unique hypertrees with the first three largest (two smallest,
respectively) $\alpha$-spectral radii among hypertrees with given number of
vertices, the unique hypertrees with minimum $\alpha$-spectral radius among the
hypertrees that are not $2$-uniform, the unique hypergraphs with the first two
largest (smallest, respectively) $\alpha$-spectral radii among unicyclic
hypergraphs with given number of vertices, and the unique hypergraphs with
maximum $\alpha$-spectral radius among hypergraphs with fixed number of pendant
edges.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 12:38:00 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Guo",
"Haiyan",
""
],
[
"Zhou",
"Bo",
""
],
[
"Lin",
"Bizhu",
""
]
] |
new_dataset
| 0.95227 |
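The abstract above (arXiv 2303.16631) defines $A_{\alpha}(G)=\alpha D(G)+(1-\alpha)A(G)$. As a small self-contained sketch of that definition, the $\alpha$-spectral radius of a toy hypergraph can be computed numerically as follows; the example hypergraph and the value of $\alpha$ are arbitrary choices, not taken from the paper.

```python
from itertools import combinations
import numpy as np

def alpha_spectral_radius(n, edges, alpha=0.5):
    """Largest eigenvalue of A_alpha = alpha*D + (1-alpha)*A for a hypergraph.

    A has zero diagonal and A[u, v] equals the number of edges containing both
    u and v; D is the diagonal matrix of row sums of A.
    """
    A = np.zeros((n, n))
    for edge in edges:
        for u, v in combinations(edge, 2):
            A[u, v] += 1
            A[v, u] += 1
    D = np.diag(A.sum(axis=1))
    A_alpha = alpha * D + (1 - alpha) * A
    return np.linalg.eigvalsh(A_alpha).max()

# A tiny 3-uniform hypertree on 5 vertices whose two edges share vertex 2.
print(alpha_spectral_radius(5, [(0, 1, 2), (2, 3, 4)], alpha=0.5))
```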
2303.16729
|
Minjia Shi
|
Minjia Shi, Shitao Li, Tor Helleseth, Jon-Lark Kim
|
Binary self-orthogonal codes which meet the Griesmer bound or have
optimal minimum distances
|
Submitted 20 January, 2023
| null | null | null |
cs.IT cs.CR math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The purpose of this paper is two-fold. First, we characterize the existence
of binary self-orthogonal codes meeting the Griesmer bound by employing
Solomon-Stiffler codes and some related residual codes. Second, using such a
characterization, we determine the exact value of $d_{so}(n,7)$ except for five
special cases and the exact value of $d_{so}(n,8)$ except for 41 special cases,
where $d_{so}(n,k)$ denotes the largest minimum distance among all binary
self-orthogonal $[n, k]$ codes. Previously, the exact value of $d_{so}(n,k)$ $(k
\le 6)$ was determined by Shi et al. (2022). In addition, we develop a general
method to prove the nonexistence of some binary self-orthogonal codes by
considering the residual code of a binary self-orthogonal code.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 14:34:27 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Li",
"Shitao",
""
],
[
"Helleseth",
"Tor",
""
],
[
"Kim",
"Jon-Lark",
""
]
] |
new_dataset
| 0.973628 |
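For reference alongside the abstract above (arXiv 2303.16729): the Griesmer bound it mentions is the classical statement below, quoted in its standard form for binary linear codes as background rather than as a result of the paper.

```latex
n \;\ge\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{2^{i}} \right\rceil
\qquad \text{for any binary linear } [n, k, d] \text{ code.}
```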
2303.16750
|
Ivan Stelmakh
|
Ivan Stelmakh, John Wieting, Graham Neubig, Nihar B. Shah
|
A Gold Standard Dataset for the Reviewer Assignment Problem
| null | null | null | null |
cs.IR cs.DL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many peer-review venues are either using or looking to use algorithms to
assign submissions to reviewers. The crux of such automated approaches is the
notion of the "similarity score"--a numerical estimate of the expertise of a
reviewer in reviewing a paper--and many algorithms have been proposed to
compute these scores. However, these algorithms have not been subjected to a
principled comparison, making it difficult for stakeholders to choose the
algorithm in an evidence-based manner. The key challenge in comparing existing
algorithms and developing better algorithms is the lack of the publicly
available gold-standard data that would be needed to perform reproducible
research. We address this challenge by collecting a novel dataset of similarity
scores that we release to the research community. Our dataset consists of 477
self-reported expertise scores provided by 58 researchers who evaluated their
expertise in reviewing papers they have read previously.
We use this data to compare several popular algorithms employed in computer
science conferences and come up with recommendations for stakeholders. Our main
findings are as follows. First, all algorithms make a non-trivial amount of
error. For the task of ordering two papers in terms of their relevance for a
reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard
cases, highlighting the vital need for more research on the
similarity-computation problem. Second, most existing algorithms are designed
to work with titles and abstracts of papers, and in this regime the Specter+MFR
algorithm performs best. Third, to improve performance, it may be important to
develop modern deep-learning based algorithms that can make use of the full
texts of papers: the classical TF-IDF algorithm enhanced with full texts of
papers is on par with the deep-learning based Specter+MFR that cannot make use
of this information.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 16:15:03 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Stelmakh",
"Ivan",
""
],
[
"Wieting",
"John",
""
],
[
"Neubig",
"Graham",
""
],
[
"Shah",
"Nihar B.",
""
]
] |
new_dataset
| 0.960358 |
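To make the classical TF-IDF baseline mentioned in the abstract above (arXiv 2303.16750) concrete, the following is a hedged scikit-learn sketch of one way a text-similarity score between a reviewer's past papers and a submission could be computed; the mean aggregation and the toy strings are illustrative assumptions, not the paper's exact procedure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviewer_papers = [
    "graph neural networks for molecular property prediction",
    "message passing and attention mechanisms on graphs",
]
submission = "attention-based graph networks for chemistry"

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviewer_papers + [submission])
# Similarity of the submission (last row) to each reviewer paper, averaged.
similarity = cosine_similarity(X[-1], X[:-1]).mean()
print(similarity)
```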
2303.16780
|
Bradford Windsor
|
Brad Windsor, Kevin Choi
|
Thistle: A Vector Database in Rust
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present Thistle, a fully functional vector database. Thistle is an entry
into the domain of latent knowledge use in answering search queries, an ongoing
research topic at both start-ups and search engine companies. We implement
Thistle with several well-known algorithms, and benchmark results on the MS
MARCO dataset. Results help clarify the latent knowledge domain as well as the
growing Rust ML ecosystem.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 23:56:23 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Windsor",
"Brad",
""
],
[
"Choi",
"Kevin",
""
]
] |
new_dataset
| 0.999688 |
2303.16805
|
Max Pascher
|
Max Pascher, Til Franzen, Kirill Kronhardt, Uwe Gruenefeld, Stefan
Schneegass, Jens Gerken
|
HaptiX: Vibrotactile Haptic Feedback for Communication of 3D Directional
Cues
|
CHI EA '23, April 23-28, 2023, Hamburg, Germany
| null |
10.1145/3544549.3585601
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In Human-Computer Interaction, vibrotactile haptic feedback offers the
advantage of being independent of any visual perception of the environment.
Most importantly, the user's field of view is not obscured by user interface
elements, and the visual sense is not unnecessarily strained. This is
especially advantageous when the visual channel is already busy, or the visual
sense is limited. We developed three design variants based on different
vibrotactile illusions to communicate 3D directional cues. In particular, we
explored two variants based on the vibrotactile illusion of the cutaneous
rabbit and one based on apparent vibrotactile motion. To communicate gradient
information, we combined these with pulse-based and intensity-based mapping. A
subsequent study showed that the pulse-based variants based on the vibrotactile
illusion of the cutaneous rabbit are suitable for communicating both
directional and gradient characteristics. The results further show that a
representation of 3D directions via vibrations can be effective and beneficial.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 15:48:21 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Pascher",
"Max",
""
],
[
"Franzen",
"Til",
""
],
[
"Kronhardt",
"Kirill",
""
],
[
"Gruenefeld",
"Uwe",
""
],
[
"Schneegass",
"Stefan",
""
],
[
"Gerken",
"Jens",
""
]
] |
new_dataset
| 0.999542 |
2303.16867
|
Sarah Ostadabbas
|
Shaotong Zhu, Michael Wan, Elaheh Hatamimajoumerd, Kashish Jain,
Samuel Zlota, Cholpady Vikram Kamath, Cassandra B. Rowan, Emma C. Grace,
Matthew S. Goodwin, Marie J. Hayes, Rebecca A. Schwartz-Mette, Emily
Zimmerman, Sarah Ostadabbas
|
A Video-based End-to-end Pipeline for Non-nutritive Sucking Action
Recognition and Segmentation in Young Infants
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present an end-to-end computer vision pipeline to detect non-nutritive
sucking (NNS) -- an infant sucking pattern with no nutrition delivered -- as a
potential biomarker for developmental delays, using off-the-shelf baby monitor
video footage. One barrier to clinical (or algorithmic) assessment of NNS stems
from its sparsity, requiring experts to wade through hours of footage to find
minutes of relevant activity. Our NNS activity segmentation algorithm solves
this problem by identifying periods of NNS with high certainty -- up to 94.0\%
average precision and 84.9\% average recall across 30 heterogeneous 60 s clips,
drawn from our manually annotated NNS clinical in-crib dataset of 183 hours of
overnight baby monitor footage from 19 infants. Our method is based on an
underlying NNS action recognition algorithm, which uses spatiotemporal deep
learning networks and infant-specific pose estimation, achieving 94.9\%
accuracy in binary classification of 960 2.5 s balanced NNS vs. non-NNS clips.
Tested on our second, independent, and public NNS in-the-wild dataset, NNS
recognition classification reaches 92.3\% accuracy, and NNS segmentation
achieves 90.8\% precision and 84.2\% recall.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 17:24:21 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Zhu",
"Shaotong",
""
],
[
"Wan",
"Michael",
""
],
[
"Hatamimajoumerd",
"Elaheh",
""
],
[
"Jain",
"Kashish",
""
],
[
"Zlota",
"Samuel",
""
],
[
"Kamath",
"Cholpady Vikram",
""
],
[
"Rowan",
"Cassandra B.",
""
],
[
"Grace",
"Emma C.",
""
],
[
"Goodwin",
"Matthew S.",
""
],
[
"Hayes",
"Marie J.",
""
],
[
"Schwartz-Mette",
"Rebecca A.",
""
],
[
"Zimmerman",
"Emily",
""
],
[
"Ostadabbas",
"Sarah",
""
]
] |
new_dataset
| 0.979342 |
2303.16899
|
Tengda Han
|
Tengda Han, Max Bain, Arsha Nagrani, G\"ul Varol, Weidi Xie, Andrew
Zisserman
|
AutoAD: Movie Description in Context
|
CVPR2023 Highlight. Project page:
https://www.robots.ox.ac.uk/~vgg/research/autoad/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this paper is an automatic Audio Description (AD) model that
ingests movies and outputs AD in text form. Generating high-quality movie AD is
challenging due to the dependency of the descriptions on context, and the
limited amount of training data available. In this work, we leverage the power
of pretrained foundation models, such as GPT and CLIP, and only train a mapping
network that bridges the two models for visually-conditioned text generation.
In order to obtain high-quality AD, we make the following four contributions:
(i) we incorporate context from the movie clip, AD from previous clips, as well
as the subtitles; (ii) we address the lack of training data by pretraining on
large-scale datasets, where visual or contextual information is unavailable,
e.g. text-only AD without movies or visual captioning datasets without context;
(iii) we improve on the currently available AD datasets, by removing label
noise in the MAD dataset, and adding character naming information; and (iv) we
obtain strong results on the movie AD task compared with previous methods.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 17:59:58 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Han",
"Tengda",
""
],
[
"Bain",
"Max",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Varol",
"Gül",
""
],
[
"Xie",
"Weidi",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.998653 |
2303.16900
|
Weihao Yu
|
Weihao Yu, Pan Zhou, Shuicheng Yan, Xinchao Wang
|
InceptionNeXt: When Inception Meets ConvNeXt
|
Code: https://github.com/sail-sg/inceptionnext
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the long-range modeling ability of ViTs, large-kernel
convolutions are widely studied and adopted recently to enlarge the receptive
field and improve model performance, like the remarkable work ConvNeXt which
employs 7x7 depthwise convolution. Although such a depthwise operator only
consumes a few FLOPs, it largely harms the model efficiency on powerful
computing devices due to the high memory access costs. For example, ConvNeXt-T
has similar FLOPs to ResNet-50 but achieves only 60% of its throughput when trained
on A100 GPUs with full precision. Although reducing the kernel size of ConvNeXt
can improve speed, it results in significant performance degradation. It is
still unclear how to speed up large-kernel-based CNN models while preserving
their performance. To tackle this issue, inspired by Inception, we propose to
decompose large-kernel depthwise convolution into four parallel branches along
the channel dimension, i.e., a small square kernel, two orthogonal band kernels,
and an identity mapping. With this new Inception depthwise convolution, we build
a series of networks, namely InceptionNeXt, which not only enjoy high
throughputs but also maintain competitive performance. For instance,
InceptionNeXt-T achieves 1.6x higher training throughput than ConvNeXt-T and attains a
0.2% top-1 accuracy improvement on ImageNet-1K. We anticipate InceptionNeXt can
serve as an economical baseline for future architecture design to reduce carbon
footprint. Code is available at https://github.com/sail-sg/inceptionnext.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 17:59:58 GMT"
}
] | 2023-03-30T00:00:00 |
[
[
"Yu",
"Weihao",
""
],
[
"Zhou",
"Pan",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Wang",
"Xinchao",
""
]
] |
new_dataset
| 0.97928 |
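As a rough illustration of the four-branch decomposition described in the InceptionNeXt abstract above (arXiv 2303.16900), the PyTorch sketch below splits the channels into a small square-kernel branch, two orthogonal band-kernel branches, and an identity branch. Kernel sizes and the branch ratio are assumptions for illustration and may differ from the authors' configuration.

```python
import torch
import torch.nn as nn

class InceptionDWConv2d(nn.Module):
    """Channel-split depthwise convolution with square, band, and identity branches."""

    def __init__(self, dim, square_kernel=3, band_kernel=11, branch_ratio=0.125):
        super().__init__()
        gc = int(dim * branch_ratio)  # channels per convolutional branch
        self.dwconv_hw = nn.Conv2d(gc, gc, square_kernel,
                                   padding=square_kernel // 2, groups=gc)
        self.dwconv_w = nn.Conv2d(gc, gc, (1, band_kernel),
                                  padding=(0, band_kernel // 2), groups=gc)
        self.dwconv_h = nn.Conv2d(gc, gc, (band_kernel, 1),
                                  padding=(band_kernel // 2, 0), groups=gc)
        self.split_sizes = (dim - 3 * gc, gc, gc, gc)  # identity branch gets the rest

    def forward(self, x):
        x_id, x_hw, x_w, x_h = torch.split(x, self.split_sizes, dim=1)
        return torch.cat(
            (x_id, self.dwconv_hw(x_hw), self.dwconv_w(x_w), self.dwconv_h(x_h)),
            dim=1,
        )

x = torch.randn(1, 64, 32, 32)
print(InceptionDWConv2d(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```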
1912.08277
|
Michel de Rougemont
|
Richard Lassaigne and Michel de Rougemont
|
Testing Membership for Timed Automata
|
26 pages
| null | null | null |
cs.LO cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a timed automaton which admits thick components and a timed word $x$, we
present a tester which decides if $x$ is in the language of the automaton or if
$x$ is $\epsilon$-far from the language, using finitely many samples taken from
the weighted time distribution $\mu$ associated with an input $x$. We introduce
a distance between timed words, the {\em timed edit distance}, which
generalizes the classical edit distance. A timed word $x$ is $\epsilon$-far
from a timed language if its relative distance to the language is greater than
$\epsilon$.
|
[
{
"version": "v1",
"created": "Tue, 17 Dec 2019 21:24:41 GMT"
},
{
"version": "v2",
"created": "Thu, 28 May 2020 15:55:15 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 10:11:35 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Lassaigne",
"Richard",
""
],
[
"de Rougemont",
"Michel",
""
]
] |
new_dataset
| 0.991942 |
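The abstract above (arXiv 1912.08277) introduces a timed edit distance that generalizes the classical edit distance. Only the classical notion is sketched below, as a reminder of the object being generalized; the timed version itself is not reproduced here.

```python
def edit_distance(a: str, b: str) -> int:
    """Classical Levenshtein distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```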
2006.05277
|
Yevheniya Nosyk
|
Yevheniya Nosyk, Maciej Korczy\'nski, Qasim Lone, Marcin Skwarek,
Baptiste Jonglez and Andrzej Duda
|
The Closed Resolver Project: Measuring the Deployment of Source Address
Validation of Inbound Traffic
| null |
IEEE/ACM Transactions on Networking (2023)
|
10.1109/TNET.2023.3257413
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Source Address Validation (SAV) is a standard aimed at discarding packets
with spoofed source IP addresses. The absence of SAV for outgoing traffic has
been known as a root cause of Distributed Denial-of-Service (DDoS) attacks and
received widespread attention. While less obvious, the absence of inbound
filtering enables an attacker to appear as an internal host of a network and
may reveal valuable information about the network infrastructure. Inbound IP
spoofing may amplify other attack vectors such as DNS cache poisoning or the
recently discovered NXNSAttack. In this paper, we present the preliminary
results of the Closed Resolver Project that aims at mitigating the problem of
inbound IP spoofing. We perform the first Internet-wide active measurement
study to enumerate networks that filter or do not filter incoming packets by
their source address, for both the IPv4 and IPv6 address spaces. To achieve
this, we identify closed and open DNS resolvers that accept spoofed requests
coming from the outside of their network. The proposed method provides the most
complete picture of inbound SAV deployment by network providers. Our
measurements cover over 55% of IPv4 and 27% of IPv6 Autonomous Systems (ASes) and
reveal that the great majority of them are fully or partially vulnerable to
inbound spoofing. By identifying dual-stacked DNS resolvers, we additionally
show that inbound filtering is less often deployed for IPv6 than it is for
IPv4. Overall, we discover 13.9 K IPv6 open resolvers that can be exploited for
amplification DDoS attacks - 13 times more than previous work. Furthermore, we
uncover 4.25 M IPv4 and 103 K IPv6 vulnerable closed resolvers that
could only be detected thanks to our spoofing technique, and that pose a
significant threat when combined with the NXNSAttack.
|
[
{
"version": "v1",
"created": "Tue, 9 Jun 2020 14:07:58 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 11:47:14 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Nosyk",
"Yevheniya",
""
],
[
"Korczyński",
"Maciej",
""
],
[
"Lone",
"Qasim",
""
],
[
"Skwarek",
"Marcin",
""
],
[
"Jonglez",
"Baptiste",
""
],
[
"Duda",
"Andrzej",
""
]
] |
new_dataset
| 0.989865 |
2112.11479
|
Geet Shingi
|
Geet Shingi, Vedangi Wagh, Kishor Wagh, Sharmila Wagh
|
AtteSTNet -- An attention and subword tokenization based approach for
code-switched text hate speech detection
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent advancements in technology have led to a boost in social media usage
which has ultimately led to large amounts of user-generated data which also
includes hateful and offensive speech. The language used in social media is
often a combination of English and the native language in the region. In India,
Hindi is used predominantly and is often code-switched with English, giving
rise to the Hinglish (Hindi+English) language. Various approaches have been
made in the past to classify the code-mixed Hinglish hate speech using
different machine learning and deep learning-based techniques. However, these
techniques make use of recurrence and convolution mechanisms, which are
computationally expensive and have high memory requirements. Past techniques
also rely on complex data processing, making them hard to sustain when the data
changes. The proposed work gives a much simpler approach that is not only on
par with these complex networks but also exceeds their performance by using
subword tokenization algorithms such as BPE and Unigram, along with multi-head
attention-based techniques, giving an accuracy of 87.41% and an F1 score of
0.851 on standard datasets. Efficient use of the BPE and Unigram algorithms
helps handle the non-conventional Hinglish vocabulary, making the proposed
technique simple, efficient, and sustainable to use in the real world.
|
[
{
"version": "v1",
"created": "Fri, 10 Dec 2021 20:01:44 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2022 09:55:41 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 08:38:01 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Shingi",
"Geet",
""
],
[
"Wagh",
"Vedangi",
""
],
[
"Wagh",
"Kishor",
""
],
[
"Wagh",
"Sharmila",
""
]
] |
new_dataset
| 0.99972 |
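To illustrate the subword tokenization step (BPE or Unigram) that the AtteSTNet abstract above relies on, here is a hedged sketch using the SentencePiece library. The corpus path, vocabulary size, and example sentence are placeholders; the paper's actual tokenizer setup may differ.

```python
import sentencepiece as spm

# Train a BPE model on a (hypothetical) code-mixed Hinglish corpus file.
spm.SentencePieceTrainer.train(
    input="hinglish_corpus.txt",   # placeholder path
    model_prefix="hinglish_bpe",
    vocab_size=8000,
    model_type="bpe",              # or "unigram"
)

sp = spm.SentencePieceProcessor(model_file="hinglish_bpe.model")
print(sp.encode("yeh movie bahut acchi thi yaar", out_type=str))
```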
2203.06111
|
Oleg Voynov
|
Oleg Voynov, Gleb Bobrovskikh, Pavel Karpyshev, Saveliy Galochkin,
Andrei-Timotei Ardelean, Arseniy Bozhenko, Ekaterina Karmanova, Pavel
Kopanev, Yaroslav Labutin-Rymsho, Ruslan Rakhimov, Aleksandr Safin, Valerii
Serpiva, Alexey Artemov, Evgeny Burnaev, Dzmitry Tsetserukou, Denis Zorin
|
Multi-sensor large-scale dataset for multi-view 3D reconstruction
|
v4: final camera-ready version
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new multi-sensor dataset for multi-view 3D surface
reconstruction. It includes registered RGB and depth data from sensors of
different resolutions and modalities: smartphones, Intel RealSense, Microsoft
Kinect, industrial cameras, and structured-light scanner. The scenes are
selected to emphasize a diverse set of material properties challenging for
existing algorithms. We provide around 1.4 million images of 107 different
scenes acquired from 100 viewing directions under 14 lighting conditions. We
expect our dataset will be useful for evaluation and training of 3D
reconstruction algorithms and for related tasks. The dataset is available at
skoltech3d.appliedai.tech.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 17:32:27 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jan 2023 19:00:08 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Feb 2023 10:33:55 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Mar 2023 11:11:08 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Voynov",
"Oleg",
""
],
[
"Bobrovskikh",
"Gleb",
""
],
[
"Karpyshev",
"Pavel",
""
],
[
"Galochkin",
"Saveliy",
""
],
[
"Ardelean",
"Andrei-Timotei",
""
],
[
"Bozhenko",
"Arseniy",
""
],
[
"Karmanova",
"Ekaterina",
""
],
[
"Kopanev",
"Pavel",
""
],
[
"Labutin-Rymsho",
"Yaroslav",
""
],
[
"Rakhimov",
"Ruslan",
""
],
[
"Safin",
"Aleksandr",
""
],
[
"Serpiva",
"Valerii",
""
],
[
"Artemov",
"Alexey",
""
],
[
"Burnaev",
"Evgeny",
""
],
[
"Tsetserukou",
"Dzmitry",
""
],
[
"Zorin",
"Denis",
""
]
] |
new_dataset
| 0.999765 |
2207.07621
|
Nikita Drobyshev
|
Nikita Drobyshev, Jenya Chelishev, Taras Khakhulin, Aleksei
Ivakhnenko, Victor Lempitsky and Egor Zakharov
|
MegaPortraits: One-shot Megapixel Neural Head Avatars
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work, we advance the neural head avatar technology to the megapixel
resolution while focusing on the particularly challenging task of cross-driving
synthesis, i.e., when the appearance of the driving image is substantially
different from the animated source image. We propose a set of new neural
architectures and training methods that can leverage both medium-resolution
video data and high-resolution image data to achieve the desired levels of
rendered image quality and generalization to novel views and motion. We
demonstrate that suggested architectures and methods produce convincing
high-resolution neural avatars, outperforming the competitors in the
cross-driving scenario. Lastly, we show how a trained high-resolution neural
avatar model can be distilled into a lightweight student model which runs in
real-time and locks the identities of neural avatars to several dozens of
pre-defined source images. Real-time operation and identity lock are essential
for many practical applications of head avatar systems.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 17:32:37 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 10:58:12 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Drobyshev",
"Nikita",
""
],
[
"Chelishev",
"Jenya",
""
],
[
"Khakhulin",
"Taras",
""
],
[
"Ivakhnenko",
"Aleksei",
""
],
[
"Lempitsky",
"Victor",
""
],
[
"Zakharov",
"Egor",
""
]
] |
new_dataset
| 0.999403 |
2208.00710
|
Panagiotis Papadopoulos
|
Paschalis Bekos, Panagiotis Papadopoulos, Evangelos P. Markatos,
Nicolas Kourtellis
|
The Hitchhiker's Guide to Facebook Web Tracking with Invisible Pixels
and Click IDs
| null |
In Proceedings of the ACM Web Conference 2023 (WWW '23)
|
10.1145/3543507.3583311
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past years, advertisement companies have used various tracking
methods to persistently track users across the web. Such tracking methods
usually include first and third-party cookies, cookie synchronization, as well
as a variety of fingerprinting mechanisms. Facebook (FB) recently introduced a
new tagging mechanism that attaches a one-time tag as a URL parameter (FBCLID)
on outgoing links to other websites. Although such a tag does not seem to have
enough information to persistently track users, we demonstrate that despite its
ephemeral nature, when combined with FB Pixel, it can aid in persistently
monitoring user browsing behavior across i) different websites, ii) different
actions on each website, iii) time, i.e., both in the past as well as in the
future. We refer to this online monitoring of users as FB web tracking. We find
that FB Pixel tracks a wide range of user activities on websites with alarming
detail, especially on websites classified as sensitive categories under GDPR.
Also, we show how the FBCLID tag can be used to match, and thus de-anonymize,
activities of online users performed in the distant past (even before those
users had a FB account) tracked by FB Pixel. In fact, by combining this tag
with cookies that have rolling expiration dates, FB can also keep track of
users' browsing activities in the future as well. Our experimental results
suggest that 23% of the 10k most popular websites have adopted this technology,
and can contribute to this activity tracking on the web. Furthermore, our
longitudinal study shows that this type of user activity tracking can go as far
back as 2015. Simply said, if a user creates for the first time a FB account
today, FB could, under some conditions, match their anonymously collected past
web browsing activity to their newly created FB profile, from as far back as
2015 and continue tracking their activity in the future.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 09:45:28 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 23:14:11 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 09:42:24 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Bekos",
"Paschalis",
""
],
[
"Papadopoulos",
"Panagiotis",
""
],
[
"Markatos",
"Evangelos P.",
""
],
[
"Kourtellis",
"Nicolas",
""
]
] |
new_dataset
| 0.997253 |
2209.09553
|
Reza Akbari
|
Reza Sepahvand, Reza Akbari, Behnaz Jamasb, Sattar Hashemi, Omid
Boushehrian
|
Using Word Embedding and Convolution Neural Network for Bug Triaging by
Considering Design Flaws
| null | null |
10.1016/j.scico.2023.102945
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Resolving bugs in the maintenance phase of software is a complicated task.
Bug assignment is one of the main tasks for resolving bugs. Some bugs cannot be
fixed properly without making design decisions and have to be assigned to
designers, rather than programmers, to avoid emerging bad smells that may cause
subsequent bug reports. Hence, it is important to refer some bugs to the
designer to check for possible design flaws. To the best of our knowledge, only
a few works have considered referring bugs to designers. Hence, this
issue is considered in this work. In this paper, a dataset is created, and a
CNN-based model is proposed to predict the need for assigning a bug to a
designer by learning the peculiarities of bug reports effective in creating bad
smells in the code. The features of each bug are extracted by the CNN from its
textual features, such as the summary and description. The number of bad smells
added to it during the fixing process, measured with the PMD tool, determines
the bug's tag. The summary and description of a new bug are given to the model,
and the model predicts the need to refer it to the designer. An accuracy of 75%
(or more) was achieved for datasets with a sufficient number of samples for deep
learning-based model training. A model is proposed to predict bug referrals to
the designer. The efficiency of the model in predicting referrals to the
designer at the time of receiving the bug report was demonstrated by testing
the model on 10 projects.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 08:43:40 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Sepahvand",
"Reza",
""
],
[
"Akbari",
"Reza",
""
],
[
"Jamasb",
"Behnaz",
""
],
[
"Hashemi",
"Sattar",
""
],
[
"Boushehrian",
"Omid",
""
]
] |
new_dataset
| 0.993719 |
2209.09699
|
Niclas V\"odisch
|
Jos\'e Arce, Niclas V\"odisch, Daniele Cattaneo, Wolfram Burgard,
Abhinav Valada
|
PADLoC: LiDAR-Based Deep Loop Closure Detection and Registration Using
Panoptic Attention
| null |
IEEE Robotics and Automation Letters, vol. 8, no. 3, pp.
1319-1326, March 2023
|
10.1109/LRA.2023.3239312
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key component of graph-based SLAM systems is the ability to detect loop
closures in a trajectory to reduce the drift accumulated over time from the
odometry. Most LiDAR-based methods achieve this goal by using only the
geometric information, disregarding the semantics of the scene. In this work,
we introduce PADLoC for joint loop closure detection and registration in
LiDAR-based SLAM frameworks. We propose a novel transformer-based head for
point cloud matching and registration, and we leverage panoptic information
during training. In particular, we propose a novel loss function that
reframes the matching problem as a classification task for the semantic labels
and as a graph connectivity assignment for the instance labels. During
inference, PADLoC does not require panoptic annotations, making it more
versatile than other methods. Additionally, we show that using two shared
matching and registration heads with their source and target inputs swapped
increases the overall performance by enforcing forward-backward consistency. We
perform extensive evaluations of PADLoC on multiple real-world datasets
demonstrating that it achieves state-of-the-art results. The code of our work
is publicly available at http://padloc.cs.uni-freiburg.de.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 13:07:49 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 10:17:55 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 15:26:15 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Arce",
"José",
""
],
[
"Vödisch",
"Niclas",
""
],
[
"Cattaneo",
"Daniele",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.998606 |
2210.01033
|
Bowen Dong
|
Bowen Dong, Pan Zhou, Shuicheng Yan, Wangmeng Zuo
|
LPT: Long-tailed Prompt Tuning for Image Classification
|
ICLR 2023 (poster)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For long-tailed classification, most works often pretrain a big model on a
large-scale dataset, and then fine-tune the whole model for adapting to
long-tailed data. Though promising, fine-tuning the whole pretrained model
tends to suffer from high cost in computation and deployment of different
models for different tasks, as well as weakened generalization ability due to
overfitting to certain features of long-tailed data. To alleviate these issues,
we propose an effective Long-tailed Prompt Tuning (LPT) method for long-tailed
classification. LPT introduces several trainable prompts into a frozen
pretrained model to adapt it to long-tailed data. For better effectiveness, we
divide prompts into two groups: 1) a shared prompt for the whole long-tailed
dataset to learn general features and to adapt a pretrained model into target
domain; and 2) group-specific prompts to gather group-specific features for the
samples which have similar features and also to empower the pretrained model
with discrimination ability. Then we design a two-phase training paradigm to
learn these prompts. In phase 1, we train the shared prompt via supervised
prompt tuning to adapt a pretrained model to the desired long-tailed domain. In
phase 2, we use the learnt shared prompt as query to select a small best
matched set for a group of similar samples from the group-specific prompt set
to dig the common features of these similar samples, then optimize these
prompts with dual sampling strategy and asymmetric GCL loss. By only
fine-tuning a few prompts while fixing the pretrained model, LPT can reduce
training and deployment cost by storing a few prompts, and enjoys a strong
generalization ability of the pretrained model. Experiments show that on
various long-tailed benchmarks, with only ~1.1% extra parameters, LPT achieves
comparable performance to previous whole-model fine-tuning methods, and is
more robust to domain shift.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 15:47:02 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 10:16:03 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Dong",
"Bowen",
""
],
[
"Zhou",
"Pan",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
new_dataset
| 0.994253 |
2210.01612
|
Ruoyu Wang
|
Ruoyu Wang, Zehao Yu and Shenghua Gao
|
PlaneDepth: Self-supervised Depth Estimation via Orthogonal Planes
|
Accepted by CVPR 2023. Code and models are available at:
https://github.com/svip-lab/PlaneDepth
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth representations based on multiple near-frontal-parallel planes have
demonstrated impressive results in self-supervised monocular depth estimation
(MDE). However, such a representation causes discontinuity of the ground, as the
ground is perpendicular to the frontal-parallel planes, which is detrimental to
the identification of drivable space in autonomous driving. In this paper, we
propose PlaneDepth, a novel orthogonal-planes-based representation, including
vertical planes and ground planes. PlaneDepth estimates the depth distribution
using a Laplacian Mixture Model based on orthogonal planes for an input image.
These planes are used to synthesize a reference view to provide the
self-supervision signal. Further, we find that the widely used resizing and
cropping data augmentation breaks the orthogonality assumptions, leading to
inferior plane predictions. We address this problem by explicitly constructing
the resizing and cropping transformation to rectify the predefined planes and
predicted camera pose. Moreover, we propose an augmented self-distillation loss
supervised with a bilateral occlusion mask to boost the robustness of
orthogonal planes representation for occlusions. Thanks to our orthogonal
planes representation, we can extract the ground plane in an unsupervised
manner, which is important for autonomous driving. Extensive experiments on the
KITTI dataset demonstrate the effectiveness and efficiency of our method. The
code is available at https://github.com/svip-lab/PlaneDepth.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 13:51:59 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 10:01:33 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 05:06:59 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Wang",
"Ruoyu",
""
],
[
"Yu",
"Zehao",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.999041 |
2210.06207
|
Abdullatif K\"oksal
|
Abdullatif K\"oksal, Silvia Severini, Hinrich Sch\"utze
|
SilverAlign: MT-Based Silver Data Algorithm For Evaluating Word
Alignment
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word alignments are essential for a variety of NLP tasks. Therefore, choosing
the best approaches for their creation is crucial. However, the scarce
availability of gold evaluation data makes the choice difficult. We propose
SilverAlign, a new method to automatically create silver data for the
evaluation of word aligners by exploiting machine translation and minimal
pairs. We show that performance on our silver data correlates well with gold
benchmarks for 9 language pairs, making our approach a valid resource for
evaluation of different domains and languages when gold data are not available.
This addresses the important scenario of missing gold alignment data for
low-resource languages.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 13:48:59 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 22:00:44 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Köksal",
"Abdullatif",
""
],
[
"Severini",
"Silvia",
""
],
[
"Schütze",
"Hinrich",
""
]
] |
new_dataset
| 0.99818 |
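A rough sketch of the minimal-pair idea behind the SilverAlign abstract above: if a source sentence and a variant differ in exactly one span, and their machine translations also differ in a span, the differing target span can be recorded as a silver alignment for the differing source span. Whitespace tokenization and the single-differing-span assumption are simplifications, not the paper's exact procedure.

```python
def differing_span(tokens_a, tokens_b):
    """Return (start_a, end_a, start_b, end_b) of the span where two token lists differ."""
    i = 0
    while i < min(len(tokens_a), len(tokens_b)) and tokens_a[i] == tokens_b[i]:
        i += 1
    j, k = len(tokens_a), len(tokens_b)
    while j > i and k > i and tokens_a[j - 1] == tokens_b[k - 1]:
        j, k = j - 1, k - 1
    return i, j, i, k

def silver_alignment(src, src_variant, mt, mt_variant):
    """Align every source position in the differing span to every differing target position."""
    s0, s1, _, _ = differing_span(src.split(), src_variant.split())
    t0, t1, _, _ = differing_span(mt.split(), mt_variant.split())
    return [(s, t) for s in range(s0, s1) for t in range(t0, t1)]

# Toy minimal pair: only the verb changes, so it should align to the changed target verb.
print(silver_alignment("the cat sleeps", "the cat eats",
                       "die Katze schläft", "die Katze frisst"))  # -> [(2, 2)]
```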
2212.04247
|
Chengwei Zheng
|
Chengwei Zheng, Wenbin Lin, Feng Xu
|
EditableNeRF: Editing Topologically Varying Neural Radiance Fields by
Key Points
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural radiance fields (NeRF) achieve highly photo-realistic novel-view
synthesis, but editing the scenes modeled by NeRF-based methods remains
challenging, especially for dynamic scenes. We propose editable neural
radiance fields that enable end-users to easily edit dynamic scenes and even
support topological changes. Given an image sequence from a single camera as
input, our network is trained fully automatically and models topologically varying
dynamics using our picked-out surface key points. Then end-users can edit the
scene by easily dragging the key points to desired new positions. To achieve
this, we propose a scene analysis method to detect and initialize key points by
considering the dynamics in the scene, and a weighted key points strategy to
model topologically varying dynamics through joint optimization of key points
and weights. Our method supports intuitive multi-dimensional (up to 3D)
editing and can generate novel scenes that are unseen in the input sequence.
Experiments demonstrate that our method achieves high-quality editing on
various dynamic scenes and outperforms the state-of-the-art. Our code and
captured data are available at https://chengwei-zheng.github.io/EditableNeRF/.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 06:08:03 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 05:14:33 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Zheng",
"Chengwei",
""
],
[
"Lin",
"Wenbin",
""
],
[
"Xu",
"Feng",
""
]
] |
new_dataset
| 0.984499 |
2212.06200
|
Pavel Tokmakov
|
Pavel Tokmakov, Jie Li, Adrien Gaidon
|
Breaking the "Object" in Video Object Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The appearance of an object can be fleeting when it transforms. As eggs are
broken or paper is torn, their color, shape and texture can change
dramatically, preserving virtually nothing of the original except for the
identity itself. Yet, this important phenomenon is largely absent from existing
video object segmentation (VOS) benchmarks. In this work, we close the gap by
collecting a new dataset for Video Object Segmentation under Transformations
(VOST). It consists of more than 700 high-resolution videos, captured in
diverse environments, which are 21 seconds long on average and densely labeled
with instance masks. A careful, multi-step approach is adopted to ensure that
these videos focus on complex object transformations, capturing their full
temporal extent. We then extensively evaluate state-of-the-art VOS methods and
make a number of important discoveries. In particular, we show that existing
methods struggle when applied to this novel task and that their main limitation
lies in over-reliance on static appearance cues. This motivates us to propose a
few modifications for the top-performing baseline that improve its capabilities
by better modeling spatio-temporal information. But more broadly, the hope is
to stimulate discussion on learning more robust video object representations.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 19:22:17 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 16:51:28 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Tokmakov",
"Pavel",
""
],
[
"Li",
"Jie",
""
],
[
"Gaidon",
"Adrien",
""
]
] |
new_dataset
| 0.998444 |
2302.07387
|
Hui Ding
|
Jiang Liu, Hui Ding, Zhaowei Cai, Yuting Zhang, Ravi Kumar Satzoda,
Vijay Mahadevan, R. Manmatha
|
PolyFormer: Referring Image Segmentation as Sequential Polygon
Generation
|
CVPR 2023. Project Page: https://polyformer.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, instead of directly predicting the pixel-level segmentation
masks, the problem of referring image segmentation is formulated as sequential
polygon generation, and the predicted polygons can be later converted into
segmentation masks. This is enabled by a new sequence-to-sequence framework,
Polygon Transformer (PolyFormer), which takes a sequence of image patches and
text query tokens as input, and outputs a sequence of polygon vertices
autoregressively. For more accurate geometric localization, we propose a
regression-based decoder, which predicts the precise floating-point coordinates
directly, without any coordinate quantization error. In the experiments,
PolyFormer outperforms the prior art by a clear margin, e.g., 5.40% and 4.52%
absolute improvements on the challenging RefCOCO+ and RefCOCOg datasets. It
also shows strong generalization ability when evaluated on the referring video
segmentation task without fine-tuning, e.g., achieving competitive 61.5% J&F on
the Ref-DAVIS17 dataset.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 23:00:25 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 23:22:31 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Liu",
"Jiang",
""
],
[
"Ding",
"Hui",
""
],
[
"Cai",
"Zhaowei",
""
],
[
"Zhang",
"Yuting",
""
],
[
"Satzoda",
"Ravi Kumar",
""
],
[
"Mahadevan",
"Vijay",
""
],
[
"Manmatha",
"R.",
""
]
] |
new_dataset
| 0.988646 |
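A small sketch of the post-processing step mentioned in the PolyFormer abstract above: converting a polygon predicted as floating-point vertices into a binary segmentation mask. Pillow's rasterizer is used purely for illustration, and the example vertices are hypothetical; the sequence-to-sequence vertex prediction itself is not shown.

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(vertices, height, width):
    """Rasterize a polygon given as (x, y) float vertices into a binary mask."""
    mask = Image.new("L", (width, height), 0)
    ImageDraw.Draw(mask).polygon([(float(x), float(y)) for x, y in vertices],
                                 outline=1, fill=1)
    return np.array(mask, dtype=np.uint8)

# Hypothetical predicted vertices (regression outputs, no coordinate quantization).
pred_vertices = [(12.3, 8.7), (54.9, 10.2), (60.1, 40.8), (15.5, 44.0)]
mask = polygon_to_mask(pred_vertices, height=64, width=64)
print(mask.shape, int(mask.sum()))
```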
2303.13284
|
Debayan Banerjee
|
Debayan Banerjee, Pranav Ajit Nair, Ricardo Usbeck, Chris Biemann
|
GETT-QA: Graph Embedding based T2T Transformer for Knowledge Graph
Question Answering
|
16 pages single column format accepted at ESWC 2023 research track
| null | null | null |
cs.CL cs.DB cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present an end-to-end Knowledge Graph Question Answering
(KGQA) system named GETT-QA. GETT-QA uses T5, a popular text-to-text
pre-trained language model. The model takes a question in natural language as
input and produces a simpler form of the intended SPARQL query. In the simpler
form, the model does not directly produce entity and relation IDs. Instead, it
produces corresponding entity and relation labels. The labels are grounded to
KG entity and relation IDs in a subsequent step. To further improve the
results, we instruct the model to produce a truncated version of the KG
embedding for each entity. The truncated KG embedding enables a finer search
for disambiguation purposes. We find that T5 is able to learn the truncated KG
embeddings without any change of loss function, improving KGQA performance. As
a result, we report strong results for LC-QuAD 2.0 and SimpleQuestions-Wikidata
datasets on end-to-end KGQA over Wikidata.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 14:06:26 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 14:59:15 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 09:48:50 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Banerjee",
"Debayan",
""
],
[
"Nair",
"Pranav Ajit",
""
],
[
"Usbeck",
"Ricardo",
""
],
[
"Biemann",
"Chris",
""
]
] |
new_dataset
| 0.992943 |
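A toy sketch of the grounding step described in the GETT-QA abstract above: the model emits an entity label plus a truncated KG embedding, candidates are retrieved by label, and the truncated embeddings are used to disambiguate among them. The candidate store, embedding length, and one of the IDs below are made-up stand-ins for a real Wikidata lookup.

```python
import numpy as np

# Hypothetical label -> (candidate ID, truncated KG embedding) store.
CANDIDATES = {
    "berlin": [("Q64", np.array([0.9, 0.1, 0.0])),            # Berlin, the German capital
               ("Q_berlin_other", np.array([0.1, 0.8, 0.3]))], # placeholder for another "Berlin"
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def ground_entity(label, predicted_embedding):
    """Pick the candidate whose truncated KG embedding is closest to the predicted one."""
    candidates = CANDIDATES.get(label.lower(), [])
    if not candidates:
        return None
    return max(candidates, key=lambda c: cosine(c[1], predicted_embedding))[0]

# The T5 decoder would emit something like: label "Berlin" plus the truncated embedding below.
print(ground_entity("Berlin", np.array([0.85, 0.15, 0.05])))  # -> "Q64"
```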
2303.14167
|
Yuanbo Yang
|
Yuanbo Yang, Yifei Yang, Hanlei Guo, Rong Xiong, Yue Wang, Yiyi Liao
|
UrbanGIRAFFE: Representing Urban Scenes as Compositional Generative
Neural Feature Fields
|
Project page: https://lv3d.github.io/urbanGIRAFFE
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating photorealistic images with controllable camera pose and scene
contents is essential for many applications including AR/VR and simulation.
Despite the fact that rapid progress has been made in 3D-aware generative
models, most existing methods focus on object-centric images and are not
applicable to generating urban scenes for free camera viewpoint control and
scene editing. To address this challenging task, we propose UrbanGIRAFFE, which
uses a coarse 3D panoptic prior, including the layout distribution of
uncountable stuff and countable objects, to guide a 3D-aware generative model.
Our model is compositional and controllable as it breaks down the scene into
stuff, objects, and sky. Using a stuff prior in the form of semantic voxel grids,
we build a conditioned stuff generator that effectively incorporates the coarse
semantic and geometry information. The object layout prior further allows us to
learn an object generator from cluttered scenes. With proper loss functions,
our approach facilitates photorealistic 3D-aware image synthesis with diverse
controllability, including large camera movement, stuff editing, and object
manipulation. We validate the effectiveness of our model on both synthetic and
real-world datasets, including the challenging KITTI-360 dataset.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 17:28:07 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 02:00:39 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Yang",
"Yuanbo",
""
],
[
"Yang",
"Yifei",
""
],
[
"Guo",
"Hanlei",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
],
[
"Liao",
"Yiyi",
""
]
] |
new_dataset
| 0.999421 |
2303.15539
|
Hongyi Xu
|
Hongyi Xu, Guoxian Song, Zihang Jiang, Jianfeng Zhang, Yichun Shi,
Jing Liu, Wanchun Ma, Jiashi Feng, Linjie Luo
|
OmniAvatar: Geometry-Guided Controllable 3D Head Synthesis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present OmniAvatar, a novel geometry-guided 3D head synthesis model
trained from in-the-wild unstructured images that is capable of synthesizing
diverse identity-preserved 3D heads with compelling dynamic details under full
disentangled control over camera poses, facial expressions, head shapes,
articulated neck and jaw poses. To achieve such a high level of disentangled
control, we first explicitly define a novel semantic signed distance function
(SDF) around a head geometry (FLAME) conditioned on the control parameters.
This semantic SDF allows us to build a differentiable volumetric correspondence
map from the observation space to a disentangled canonical space from all the
control parameters. We then leverage the 3D-aware GAN framework (EG3D) to
synthesize detailed shape and appearance of 3D full heads in the canonical
space, followed by a volume rendering step guided by the volumetric
correspondence map to output into the observation space. To ensure the control
accuracy on the synthesized head shapes and expressions, we introduce a
geometry prior loss to conform to head SDF and a control loss to conform to the
expression code. Further, we enhance the temporal realism with dynamic details
conditioned upon varying expressions and joint poses. Our model can synthesize
more preferable identity-preserved 3D heads with compelling dynamic details
compared to the state-of-the-art methods both qualitatively and quantitatively.
We also provide an ablation study to justify many of our system design choices.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 18:36:53 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Xu",
"Hongyi",
""
],
[
"Song",
"Guoxian",
""
],
[
"Jiang",
"Zihang",
""
],
[
"Zhang",
"Jianfeng",
""
],
[
"Shi",
"Yichun",
""
],
[
"Liu",
"Jing",
""
],
[
"Ma",
"Wanchun",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Luo",
"Linjie",
""
]
] |
new_dataset
| 0.999513 |
2303.15540
|
Zhongshu Gu
|
Pau-Chen Cheng, Wojciech Ozga, Enriquillo Valdez, Salman Ahmed,
Zhongshu Gu, Hani Jamjoom, Hubertus Franke, James Bottomley
|
Intel TDX Demystified: A Top-Down Approach
| null | null | null | null |
cs.CR cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intel Trust Domain Extensions (TDX) is a new architectural extension in the
4th Generation Intel Xeon Scalable Processor that supports confidential
computing. TDX allows the deployment of virtual machines in the
Secure-Arbitration Mode (SEAM) with encrypted CPU state and memory, integrity
protection, and remote attestation. TDX aims to enforce hardware-assisted
isolation for virtual machines and minimize the attack surface exposed to host
platforms, which are considered to be untrustworthy or adversarial in
confidential computing's new threat model. TDX can be leveraged by regulated
industries or sensitive data holders to outsource their computations and data
with end-to-end protection in public cloud infrastructure.
This paper aims to provide a comprehensive understanding of TDX to potential
adopters, domain experts, and security researchers looking to leverage the
technology for their own purposes. We adopt a top-down approach, starting with
high-level security principles and moving to low-level technical details of
TDX. Our analysis is based on publicly available documentation and source code,
offering insights from security researchers outside of Intel.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 18:38:28 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Cheng",
"Pau-Chen",
""
],
[
"Ozga",
"Wojciech",
""
],
[
"Valdez",
"Enriquillo",
""
],
[
"Ahmed",
"Salman",
""
],
[
"Gu",
"Zhongshu",
""
],
[
"Jamjoom",
"Hani",
""
],
[
"Franke",
"Hubertus",
""
],
[
"Bottomley",
"James",
""
]
] |
new_dataset
| 0.999796 |
2303.15556
|
Timothy Gomez
|
Robert M. Alaniz and Josh Brunner and Michael Coulombe and Erik D.
Demaine and Yevhenii Diomidov and Ryan Knobel and Timothy Gomez and Elise
Grizzell and Jayson Lynch and Andrew Rodriguez and Robert Schweller and Tim
Wylie
|
Complexity of Reconfiguration in Surface Chemical Reaction Networks
| null | null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
We analyze the computational complexity of basic reconfiguration problems for
the recently introduced surface Chemical Reaction Networks (sCRNs), where
ordered pairs of adjacent species nondeterministically transform into a
different ordered pair of species according to a predefined set of allowed
transition rules (chemical reactions). In particular, two questions that are
fundamental to the simulation of sCRNs are whether a given configuration of
molecules can ever transform into another given configuration, and whether a
given cell can ever contain a given species, given a set of transition rules.
We show that these problems can be solved in polynomial time, are NP-complete,
or are PSPACE-complete in a variety of different settings, including when
adjacent species just swap instead of arbitrary transformation (swap sCRNs),
and when cells can change species a limited number of times (k-burnout). Most
problems turn out to be at least NP-hard except with very few distinct species
(2 or 3).
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 19:14:50 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Alaniz",
"Robert M.",
""
],
[
"Brunner",
"Josh",
""
],
[
"Coulombe",
"Michael",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Diomidov",
"Yevhenii",
""
],
[
"Knobel",
"Ryan",
""
],
[
"Gomez",
"Timothy",
""
],
[
"Grizzell",
"Elise",
""
],
[
"Lynch",
"Jayson",
""
],
[
"Rodriguez",
"Andrew",
""
],
[
"Schweller",
"Robert",
""
],
[
"Wylie",
"Tim",
""
]
] |
new_dataset
| 0.954909 |
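A brute-force sketch of the first reconfiguration question from the abstract above, for the swap variant: breadth-first search over whole configurations to decide whether one configuration can ever transform into another under a given set of reaction rules. A 1D row of cells stands in for the 2D surface, and the search is exponential in the number of cells; it is meant only to make the problem statement concrete, not to reflect the paper's complexity results.

```python
from collections import deque

def neighbors(config, rules):
    """Yield configurations reachable in one reaction on a 1D row of cells."""
    for i in range(len(config) - 1):
        pair = (config[i], config[i + 1])
        if pair in rules:
            a, b = rules[pair]
            yield config[:i] + (a, b) + config[i + 2:]

def reachable(start, goal, rules):
    """BFS over configurations: can `start` ever transform into `goal`?"""
    start, goal = tuple(start), tuple(goal)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for nxt in neighbors(cur, rules):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A single swap rule: an adjacent ordered pair (A, B) may become (B, A).
rules = {("A", "B"): ("B", "A")}
print(reachable("AAB", "ABA", rules))  # True
```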