id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.02883
|
Tao Yu
|
Tao Yu, Shunqing Zhang, Xiaojing Chen and Xin Wang
|
A Novel Energy Efficiency Metric for Next Generation Wireless
Communication Networks
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a core performance metric for green communications, the conventional
energy efficiency definition has successfully resolved many issues in the
energy efficient wireless network design. In the past several generations of
wireless communication networks, the traditional energy efficiency measure
plays an important role in guiding many energy saving techniques for slowly varying
traffic profiles. However, for the next generation of wireless networks, the
traditional energy efficiency fails to capture the traffic and capacity
variations of wireless networks in the temporal or spatial domains, which are shown
to be quite common, especially with ultra-scale multiple antennas and
space-air-ground integrated networks. In this paper, we present a novel energy
efficiency metric named integrated relative energy efficiency (IREE), which is
able to jointly measure the traffic profiles and the network capacities from
the energy efficiency perspective. On top of that, the IREE based green
trade-offs have been investigated and compared with the conventional energy
efficient design. Moreover, we apply the IREE based green trade-offs to
evaluate several candidate technologies for 6G networks, including
reconfigurable intelligent surfaces and space-air-ground integrated network.
Through some analytical and numerical results, we show that the proposed IREE
metric is able to capture the wireless traffic and capacity mismatch property,
which is significantly different from the conventional energy efficiency
metric. Since the IREE oriented design or deployment strategy is able to
consider the network capacity improvement and the wireless traffic matching
simultaneously, it can be regarded as useful guidance for future energy
efficient network design.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 01:29:36 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Yu",
"Tao",
""
],
[
"Zhang",
"Shunqing",
""
],
[
"Chen",
"Xiaojing",
""
],
[
"Wang",
"Xin",
""
]
] |
new_dataset
| 0.980761 |
2210.02904
|
Zixing Zhang
|
Zixing Zhang, Thorin Farnsworth, Senling Lin, Salah Karout
|
WakeUpNet: A Mobile-Transformer based Framework for End-to-End Streaming
Voice Trigger
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end models have gradually become the main technical stream for voice
trigger, aiming to achieve the utmost prediction accuracy with a small
footprint. In the present paper, we propose an end-to-end voice trigger framework,
namely WakeupNet, which is basically structured on a Transformer encoder. The
purpose of this framework is to explore the context-capturing capability of
Transformer, as sequential information is vital for wakeup-word detection.
However, the conventional Transformer encoder is too large to fit our task. To
address this issue, we introduce different model compression approaches to
shrink the vanilla one into a tiny one, called mobile-Transformer. To evaluate
the performance of mobile-Transformer, we conduct extensive experiments on a
large publicly available dataset, HiMia. The obtained results indicate that the
introduced mobile-Transformer significantly outperforms other frequently used
models for voice trigger in both clean and noisy scenarios.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 13:18:48 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Zhang",
"Zixing",
""
],
[
"Farnsworth",
"Thorin",
""
],
[
"Lin",
"Senling",
""
],
[
"Karout",
"Salah",
""
]
] |
new_dataset
| 0.999375 |
2210.02925
|
Manfred Kufleitner
|
Manfred Kufleitner
|
Yet another proof of Parikh's Theorem
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Parikh's Theorem says that the Parikh image of a context-free language is
semilinear. We give a short proof of Parikh's Theorem using the formulation of
Verma, Seidl, and Schwentick in terms of Presburger arithmetic. The proof
relies on an Eulerian property of derivation trees of context-free languages
and was inspired by Hierholzer's algorithm; it does not use the Chomsky normal
form.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 13:56:27 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Kufleitner",
"Manfred",
""
]
] |
new_dataset
| 0.998929 |
2210.02946
|
Songhao Han
|
Songhao Han (1), Wei Huang (1), Xiaotian Luan (2) ((1) Beihang
University, (2) Peking University)
|
VLSNR: Vision-Linguistics Coordination Time Sequence-aware News
Recommendation
|
10 pages, 5 figures
| null | null | null |
cs.IR cs.AI cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
News representation and user-oriented modeling are both essential for news
recommendation. Most existing methods are based on textual information but
ignore the visual information and users' dynamic interests. However, compared
to text-only content, multimodal semantics is beneficial for enhancing the
comprehension of users' temporal and long-lasting interests. In our work, we
propose a vision-linguistics coordinated time-sequence news recommendation model.
Firstly, a pretrained multimodal encoder is applied to embed images and texts
into the same feature space. Then the self-attention network is used to learn
the chronological sequence. Additionally, an attentional GRU network is
proposed to model user preference in terms of time adequately. Finally, the
click history and user representation are embedded to calculate the ranking
scores for candidate news. Furthermore, we also construct a large-scale
multimodal news recommendation dataset, V-MIND. Experimental results show that
our model outperforms baselines and achieves SOTA on our independently
constructed dataset.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 14:27:37 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Han",
"Songhao",
""
],
[
"Huang",
"Wei",
""
],
[
"Luan",
"Xiaotian",
""
]
] |
new_dataset
| 0.99894 |
2210.02987
|
Sharif Jacobino
|
Sharif Jacobino, Johan Pouwelse
|
TrustVault: A privacy-first data wallet for the European Blockchain
Services Infrastructure
| null | null | null | null |
cs.DC cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The European Union is on course to introduce a European Digital Identity that
will be available to all EU citizens and businesses. This will have a huge
impact on how citizens and businesses interact online. Big Tech companies
currently dictate how digital identities are used. As a result, they have
amassed vast amounts of private user data. Movements like Self-Sovereign
Identity aim to give users control over their online identity. TrustVault is
the first data wallet that gives users back control of their identity and all
their data. TrustVault allows users to store all their data on their
smartphones and control with whom they share it. The user has fine-grained
access control based on verifiable user attributes. EBSI connects TrustVault to
the European Self-Sovereign Identity Framework allowing users to use Verifiable
Credentials from public and private institutions in their access control
policies. The system is serverless and has no Trusted Third Parties. TrustVault
replaces the for-profit infrastructure of Big Tech with a public and
transparent platform for innovation.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 15:23:55 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Jacobino",
"Sharif",
""
],
[
"Pouwelse",
"Johan",
""
]
] |
new_dataset
| 0.999294 |
2210.03007
|
Hassan Abu Alhaija
|
Hassan Abu Alhaija, Alara Dirik, André Knörig, Sanja Fidler, Maria
Shugrina
|
XDGAN: Multi-Modal 3D Shape Generation in 2D Space
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Generative models for 2D images have recently seen tremendous progress in
quality, resolution and speed as a result of the efficiency of 2D convolutional
architectures. However, it is difficult to extend this progress into the 3D
domain since most current 3D representations rely on custom network components.
This paper addresses a central question: Is it possible to directly leverage 2D
image generative models to generate 3D shapes instead? To answer this, we
propose XDGAN, an effective and fast method for applying 2D image GAN
architectures to the generation of 3D object geometry combined with additional
surface attributes, like color textures and normals. Specifically, we propose a
novel method to convert 3D shapes into compact 1-channel geometry images and
leverage StyleGAN3 and image-to-image translation networks to generate 3D
objects in 2D space. The generated geometry images are quick to convert to 3D
meshes, enabling real-time 3D object synthesis, visualization and interactive
editing. Moreover, the use of standard 2D architectures can help bring more 2D
advances into the 3D realm. We show both quantitatively and qualitatively that
our method is highly effective at various tasks such as 3D shape generation,
single view reconstruction and shape manipulation, while being significantly
faster and more flexible compared to recent 3D generative models.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 15:54:01 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Alhaija",
"Hassan Abu",
""
],
[
"Dirik",
"Alara",
""
],
[
"Knörig",
"André",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Shugrina",
"Maria",
""
]
] |
new_dataset
| 0.984657 |
2210.03014
|
Yiwei Zhang
|
Yiwei Zhang, Siqi Ma, Tiancheng Chen, Juanru Li, Robert H. Deng, Elisa
Bertino
|
EvilScreen Attack: Smart TV Hijacking via Multi-channel Remote Control
Mimicry
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern smart TVs often communicate with their remote controls (including
those smart phone simulated ones) using multiple wireless channels (e.g.,
Infrared, Bluetooth, and Wi-Fi). However, this multi-channel remote control
communication introduces a new attack surface. An inherent security flaw is
that remote controls of most smart TVs are designed to work in a benign
environment rather than an adversarial one, and thus wireless communications
between a smart TV and its remote controls are not strongly protected.
Attackers could leverage such a flaw to abuse the remote control communication
and compromise smart TV systems. In this paper, we propose EvilScreen, a novel
attack that exploits ill-protected remote control communications to access
protected resources of a smart TV or even control the screen. EvilScreen
exploits a multi-channel remote control mimicry vulnerability present in today's
smart TVs. Unlike other attacks, which compromise the TV system by exploiting
code vulnerabilities or malicious third-party apps, EvilScreen directly reuses
commands of different remote controls, combines them together to circumvent
deployed authentication and isolation policies, and finally accesses or
controls TV resources remotely. We evaluated eight mainstream smart TVs and
found that they are all vulnerable to EvilScreen attacks, including a Samsung
product adopting the ISO/IEC security specification.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 16:02:37 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Zhang",
"Yiwei",
""
],
[
"Ma",
"Siqi",
""
],
[
"Chen",
"Tiancheng",
""
],
[
"Li",
"Juanru",
""
],
[
"Deng",
"Robert H.",
""
],
[
"Bertino",
"Elisa",
""
]
] |
new_dataset
| 0.999413 |
2210.03027
|
Yuecheng Zhou
|
Yuecheng Zhou, Yaolong Ju, Lingyun Xie
|
AnimeTAB: A new guitar tablature dataset of anime and game music
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While guitar tablature has become a popular topic in MIR research, there
exists no guitar tablature dataset that focuses on the soundtracks of
anime and video games, which have a surprisingly broad and growing audience
among young people. In this paper, we present AnimeTAB, a fingerstyle guitar
tablature dataset in MusicXML format, which provides more high-quality guitar
tablature for both researchers and guitar players. AnimeTAB contains 412 full
tracks and 547 clips; the latter are annotated with musical structures (intro,
verse, chorus, and bridge). An accompanying analysis toolkit, TABprocessor, is
included to further facilitate its use. This includes functions for melody and
bassline extraction, key detection, and chord labeling, which are implemented
using rule-based algorithms. We evaluated each of these functions against a
manually annotated ground truth. Finally, as an example, we performed a music
and technique analysis of AnimeTAB using TABprocessor. Our data and code have
been made publicly available for composers, performers, and music information
retrieval (MIR) researchers alike.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 16:21:26 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Zhou",
"Yuecheng",
""
],
[
"Ju",
"Yaolong",
""
],
[
"Xie",
"Lingyun",
""
]
] |
new_dataset
| 0.99986 |
2210.03040
|
Yuchao Dai Dr.
|
Bin Fan, Yuchao Dai and Hongdong Li
|
Rolling Shutter Inversion: Bring Rolling Shutter Images to High
Framerate Global Shutter Video
|
Accepted by IEEE Transactions on Pattern Analysis and Machine
Intelligence (IEEE TPAMI), 16 Pages, 14 Figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A single rolling-shutter (RS) image may be viewed as a row-wise combination
of a sequence of global-shutter (GS) images captured by a (virtual) moving GS
camera within the exposure duration. Although RS cameras are widely used, the
RS effect causes obvious image distortion especially in the presence of fast
camera motion, hindering downstream computer vision tasks. In this paper, we
propose to invert the RS image capture mechanism, i.e., recovering a continuous
high framerate GS video from two time-consecutive RS frames. We call this task
the RS temporal super-resolution (RSSR) problem. The RSSR is a very challenging
task, and to our knowledge, no practical solution exists to date. This paper
presents a novel deep-learning based solution. By leveraging the multi-view
geometry relationship of the RS imaging process, our learning-based framework
successfully achieves high framerate GS generation. Specifically, three novel
contributions can be identified: (i) novel formulations for bidirectional RS
undistortion flows under constant velocity as well as constant acceleration
motion models. (ii) a simple linear scaling operation, which bridges the RS
undistortion flow and regular optical flow. (iii) a new mutual conversion
scheme between varying RS undistortion flows that correspond to different
scanlines. Our method also exploits the underlying spatial-temporal geometric
relationships within a deep learning framework, where no additional supervision
is required beyond the necessary middle-scanline GS image. Building upon these
contributions, we present the very first rolling-shutter temporal
super-resolution deep-network that is able to recover high framerate GS videos
from just two RS frames. Extensive experimental results on both synthetic and
real data show that our proposed method can produce high-quality GS image
sequences with rich details, outperforming the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 16:47:12 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Fan",
"Bin",
""
],
[
"Dai",
"Yuchao",
""
],
[
"Li",
"Hongdong",
""
]
] |
new_dataset
| 0.9575 |
2210.03065
|
Wonse Jo
|
Wonse Jo, Ruiqi Wang, Su Sun, Revanth Krishna Senthilkumaran, Daniel
Foti, and Byung-Cheol Min
|
MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment
on Simultaneous Tasks
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents MOCAS, a multimodal dataset dedicated to human cognitive
workload (CWL) assessment. In contrast to existing datasets based on virtual
game stimuli, the data in MOCAS was collected from realistic closed-circuit
television (CCTV) monitoring tasks, increasing its applicability for real-world
scenarios. To build MOCAS, two off-the-shelf wearable sensors and one webcam
were utilized to collect physiological signals and behavioral features from 21
human subjects. After each task, participants reported their CWL by completing
the NASA-Task Load Index (NASA-TLX) and Instantaneous Self-Assessment (ISA).
Personal background (e.g., personality and prior experience) was surveyed using
demographic and Big Five Factor personality questionnaires, and two domains of
subjective emotion information (i.e., arousal and valence) were obtained from
the Self-Assessment Manikin, which could serve as potential indicators for
improving CWL recognition performance. Technical validation was conducted to
demonstrate that target CWL levels were elicited during simultaneous CCTV
monitoring tasks; its results support the high quality of the collected
multimodal signals.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:20:15 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Jo",
"Wonse",
""
],
[
"Wang",
"Ruiqi",
""
],
[
"Sun",
"Su",
""
],
[
"Senthilkumaran",
"Revanth Krishna",
""
],
[
"Foti",
"Daniel",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.999886 |
2210.03072
|
Giuseppe Stragapede
|
Giuseppe Stragapede, Ruben Vera-Rodriguez, Ruben Tolosana, Aythami
Morales, Julian Fierrez, Javier Ortega-Garcia, Sanka Rasnayaka, Sachith
Seneviratne, Vipula Dissanayake, Jonathan Liebers, Ashhadul Islam, Samir
Brahim Belhaouari, Sumaiya Ahmad, Suraiya Jabin
|
IJCB 2022 Mobile Behavioral Biometrics Competition (MobileB2C)
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper describes the experimental framework and results of the IJCB 2022
Mobile Behavioral Biometrics Competition (MobileB2C). The aim of MobileB2C is
benchmarking mobile user authentication systems based on behavioral biometric
traits transparently acquired by mobile devices during ordinary Human-Computer
Interaction (HCI), using a novel public database, BehavePassDB, and a standard
experimental protocol. The competition is divided into four tasks corresponding
to typical user activities: keystroke, text reading, gallery swiping, and
tapping. The data are composed of touchscreen data and several background
sensor data simultaneously acquired. "Random" (different users with different
devices) and "skilled" (different user on the same device attempting to imitate
the legitimate one) impostor scenarios are considered. The results achieved by
the participants show the feasibility of user authentication through behavioral
biometrics, although this proves to be a non-trivial challenge. MobileB2C will
be established as an on-going competition.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:27:05 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Stragapede",
"Giuseppe",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Morales",
"Aythami",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Ortega-Garcia",
"Javier",
""
],
[
"Rasnayaka",
"Sanka",
""
],
[
"Seneviratne",
"Sachith",
""
],
[
"Dissanayake",
"Vipula",
""
],
[
"Liebers",
"Jonathan",
""
],
[
"Islam",
"Ashhadul",
""
],
[
"Belhaouari",
"Samir Brahim",
""
],
[
"Ahmad",
"Sumaiya",
""
],
[
"Jabin",
"Suraiya",
""
]
] |
new_dataset
| 0.989582 |
2210.03118
|
Matteo Poggi
|
Andrea Conti, Matteo Poggi, Filippo Aleotti and Stefano Mattoccia
|
Unsupervised confidence for LiDAR depth maps and applications
|
IROS 2022. Code available at
https://github.com/andreaconti/lidar-confidence
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Depth perception is pivotal in many fields, such as robotics and autonomous
driving, to name a few. Consequently, depth sensors such as LiDARs rapidly
spread in many applications. The 3D point clouds generated by these sensors
must often be coupled with an RGB camera to understand the framed scene
semantically. Usually, the former is projected over the camera image plane,
leading to a sparse depth map. Unfortunately, this process, coupled with the
intrinsic issues affecting all the depth sensors, yields noise and gross
outliers in the final output. For this purpose, in this paper, we propose an effective
unsupervised framework aimed at explicitly addressing this issue by learning to
estimate the confidence of the LiDAR sparse depth map and thus allowing for
filtering out the outliers. Experimental results on the KITTI dataset highlight
that our framework excels for this purpose. Moreover, we demonstrate how this
achievement can improve a wide range of tasks.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:59:58 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Conti",
"Andrea",
""
],
[
"Poggi",
"Matteo",
""
],
[
"Aleotti",
"Filippo",
""
],
[
"Mattoccia",
"Stefano",
""
]
] |
new_dataset
| 0.974109 |
2103.10997
|
Hamidreza Kasaei
|
Hamidreza Kasaei, Mohammadreza Kasaei
|
MVGrasp: Real-Time Multi-View 3D Object Grasping in Highly Cluttered
Environments
|
The video of our experiments can be found here:
https://youtu.be/c-4lzjbF7fY
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays robots play an increasingly important role in our daily life. In
human-centered environments, robots often encounter piles of objects, packed
items, or isolated objects. Therefore, a robot must be able to grasp and
manipulate different objects in various situations to help humans with daily
tasks. In this paper, we propose a multi-view deep learning approach to handle
robust object grasping in human-centric domains. In particular, our approach
takes a point cloud of an arbitrary object as an input, and then, generates
orthographic views of the given object. The obtained views are finally used to
estimate pixel-wise grasp synthesis for each object. We train the model
end-to-end using a small object grasp dataset and test it on both simulations
and real-world data without any further fine-tuning. To evaluate the
performance of the proposed approach, we performed extensive sets of
experiments in three scenarios, including isolated objects, packed items, and
piles of objects. Experimental results show that our approach performs very
well in all simulation and real-robot scenarios and is able to achieve
reliable closed-loop grasping of novel objects across various scene
configurations.
|
[
{
"version": "v1",
"created": "Fri, 19 Mar 2021 19:38:00 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Jun 2021 13:56:58 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Sep 2021 10:22:17 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Mar 2022 20:21:35 GMT"
},
{
"version": "v5",
"created": "Mon, 25 Jul 2022 06:53:29 GMT"
},
{
"version": "v6",
"created": "Wed, 5 Oct 2022 15:22:14 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Kasaei",
"Hamidreza",
""
],
[
"Kasaei",
"Mohammadreza",
""
]
] |
new_dataset
| 0.999749 |
2111.04867
|
Aashaka Shah
|
Aashaka Shah, Vijay Chidambaram, Meghan Cowan, Saeed Maleki, Madan
Musuvathi, Todd Mytkowicz, Jacob Nelson, Olli Saarikivi, Rachee Singh
|
TACCL: Guiding Collective Algorithm Synthesis using Communication
Sketches
|
Accepted at NSDI'23. Contains 20 pages, 11 figures, including
Appendix
| null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning models are increasingly being trained across multiple GPUs
and servers. In this setting, data is transferred between GPUs using
communication collectives such as AlltoAll and AllReduce, which can become a
significant bottleneck in training large models. Thus, it is important to use
efficient algorithms for collective communication. We develop TACCL, a tool
that enables algorithm designers to guide a synthesizer into automatically
generating algorithms for a given hardware configuration and communication
collective. TACCL uses a novel communication sketch abstraction to get crucial
information from the designer to significantly reduce the search space and
guide the synthesizer towards better algorithms. TACCL also uses a novel
encoding of the problem that allows it to scale beyond single-node topologies.
We use TACCL to synthesize algorithms for three collectives and two hardware
topologies: DGX-2 and NDv2. We demonstrate that the algorithms synthesized by
TACCL outperform the Nvidia Collective Communication Library (NCCL) by up to
6.7x. We also show that TACCL can speed up end-to-end training of
Transformer-XL and BERT models by 11% to 2.3x for different batch sizes.
|
[
{
"version": "v1",
"created": "Mon, 8 Nov 2021 23:20:52 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Nov 2021 17:20:28 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Jul 2022 00:16:32 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Oct 2022 05:01:59 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Shah",
"Aashaka",
""
],
[
"Chidambaram",
"Vijay",
""
],
[
"Cowan",
"Meghan",
""
],
[
"Maleki",
"Saeed",
""
],
[
"Musuvathi",
"Madan",
""
],
[
"Mytkowicz",
"Todd",
""
],
[
"Nelson",
"Jacob",
""
],
[
"Saarikivi",
"Olli",
""
],
[
"Singh",
"Rachee",
""
]
] |
new_dataset
| 0.958255 |
2112.11941
|
Frank Binder
|
Jörg Frohberg and Frank Binder
|
CRASS: A Novel Data Set and Benchmark to Test Counterfactual Reasoning
of Large Language Models
|
10 pages including references, plus 5 pages appendix. Edits for
version 3 vs LREC 2022: Point out human baseline in abstract (also to match
arxiv abstract), fix affiliation apergo.ai, and fix a recurring typo
|
Proceedings of the 13th Language Resources and Evaluation
Conference (LREC 2022), Marseille, France pp. 2126-2140 (2022)
https://aclanthology.org/2022.lrec-1.229/
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the CRASS (counterfactual reasoning assessment) data set and
benchmark utilizing questionized counterfactual conditionals as a novel and
powerful tool to evaluate large language models. We present the data set design
and benchmark that supports scoring against a crowd-validated human baseline.
We test six state-of-the-art models against our benchmark. Our results show
that it poses a valid challenge for these models and opens up considerable room
for their improvement.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 15:03:23 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2022 06:52:42 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Oct 2022 19:03:40 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Frohberg",
"Jörg",
""
],
[
"Binder",
"Frank",
""
]
] |
new_dataset
| 0.999791 |
2201.09825
|
Thorsten Wißmann
|
Thorsten Wißmann
|
Supported Sets -- A New Foundation For Nominal Sets And Automata
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present work proposes and discusses the category of supported sets which
provides a uniform foundation for nominal sets of various kinds, such as those
for equality symmetry, for the order symmetry, and renaming sets. We show that
all these differently flavoured categories of nominal sets are monadic over
supported sets. Thus, supported sets provide a canonical finite way to
represent nominal sets and the automata therein, e.g. register automata. Name
binding in supported sets is modelled by a functor following the idea of de
Bruijn indices. This functor lifts to the well-known abstraction functor in
nominal sets. Together with the monadicity result, this gives rise to a
transformation process that takes the finite representation of a register
automaton in supported sets and transforms it into its configuration automaton
in nominal sets.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 17:41:53 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 10:35:11 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Wißmann",
"Thorsten",
""
]
] |
new_dataset
| 0.999108 |
2203.00307
|
Jing Tan
|
Jing Tan, Yuhong Wang, Gangshan Wu, Limin Wang
|
Temporal Perceiver: A General Architecture for Arbitrary Boundary
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generic Boundary Detection (GBD) aims at locating the general boundaries that
divide videos into semantically coherent and taxonomy-free units, and could
serve as an important pre-processing step for long-form video understanding.
Previous works often separately handle these different types of generic
boundaries with specific designs of deep networks from simple CNN to LSTM.
Instead, in this paper, we present Temporal Perceiver, a general architecture
with Transformer, offering a unified solution to the detection of arbitrary
generic boundaries, ranging from shot-level, event-level, to scene-level GBDs.
The core design is to introduce a small set of latent feature queries as
anchors to compress the redundant video input into a fixed dimension via
cross-attention blocks. Thanks to this fixed number of latent units, it greatly
reduces the quadratic complexity of attention operation to a linear form of
input frames. Specifically, to explicitly leverage the temporal structure of
videos, we construct two types of latent feature queries: boundary queries and
context queries, which handle the semantic incoherence and coherence
accordingly. Moreover, to guide the learning of latent feature queries, we
propose an alignment loss on the cross-attention maps to explicitly encourage
the boundary queries to attend on the top boundary candidates. Finally, we
present a sparse detection head on the compressed representation, and directly
output the final boundary detection results without any post-processing module.
We test our Temporal Perceiver on a variety of GBD benchmarks. Our method
obtains the state-of-the-art results on all benchmarks with RGB single-stream
features: SoccerNet-v2 (81.9% avg-mAP), Kinetics-GEBD (86.0% avg-f1), TAPOS
(73.2% avg-f1), MovieScenes (51.9% AP and 53.1% mIoU) and MovieNet (53.3% AP
and 53.2% mIoU), demonstrating the generalization ability of our Temporal
Perceiver.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 09:31:30 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 08:27:53 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Tan",
"Jing",
""
],
[
"Wang",
"Yuhong",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.999067 |
2205.12796
|
Yang Li
|
Yang Li and Tatsuya Harada
|
Non-rigid Point Cloud Registration with Neural Deformation Pyramid
|
NeurIPS'2022 camera ready. Code:
https://github.com/rabbityl/DeformationPyramid
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-rigid point cloud registration is a key component in many computer vision
and computer graphics applications. The high complexity of the unknown
non-rigid motion makes this task a challenging problem. In this paper, we break
down this problem via hierarchical motion decomposition. Our method, called
Neural Deformation Pyramid (NDP), represents non-rigid motion using a pyramid
architecture. Each pyramid level, denoted by a Multi-Layer Perceptron (MLP),
takes as input a sinusoidally encoded 3D point and outputs its motion
increments from the previous level. The sinusoidal function starts with a low
input frequency and gradually increases when the pyramid level goes down. This
allows a multi-level rigid to nonrigid motion decomposition and also speeds up
the solving by 50 times compared to the existing MLP-based approach. Our method
achieves advanced partial-to-partial non-rigid point cloud registration results
on the 4DMatch/4DLoMatch benchmark under both non-learned and supervised
settings.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 14:10:33 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 11:38:29 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Oct 2022 12:12:09 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
new_dataset
| 0.96955 |
2206.01386
|
Nick Gibbons
|
Nicholas N. Gibbons and Kyle A. Damm and Peter A. Jacobs and Rowan J.
Gollan
|
Eilmer: an Open-Source Multi-Physics Hypersonic Flow Solver
| null |
Comput. Phys. Commun. 282 (2023) Article 108551
|
10.1016/j.cpc.2022.108551
| null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces Eilmer, a general-purpose open-source compressible flow
solver developed at the University of Queensland, designed to support research
calculations in hypersonics and high-speed aerothermodynamics. Eilmer has a
broad userbase in several university research groups and a wide range of
capabilities, which are documented on the project's website, in the
accompanying reference manuals, and in an extensive catalogue of example
simulations. The first part of this paper describes the formulation of the
code: the equations, physical models, and numerical methods that are used in a
basic fluid dynamics simulation, as well as a handful of optional multi-physics
models that are commonly added on to do calculations of hypersonic flow. The
second section describes the processes used to develop and maintain the code,
documenting our adherence to good programming practice and endorsing certain
techniques that seem to be particularly helpful for scientific codes. The final
section describes a half-dozen example simulations that span the range of
Eilmer's capabilities, each consisting of some sample results and a short
explanation of the problem being solved, which together will hopefully assist
new users in beginning to use Eilmer in their own research projects.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 04:19:56 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Gibbons",
"Nicholas N.",
""
],
[
"Damm",
"Kyle A.",
""
],
[
"Jacobs",
"Peter A.",
""
],
[
"Gollan",
"Rowan J.",
""
]
] |
new_dataset
| 0.999691 |
2206.02780
|
Gene Chou
|
Gene Chou, Ilya Chugunov, Felix Heide
|
GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the generalization capabilities of neural signed distance
functions (SDFs) for learning 3D object representations for unseen and
unlabeled point clouds. Existing methods can fit SDFs to a handful of object
classes and boast fine detail or fast inference speeds, but do not generalize
well to unseen shapes. We introduce a two-stage semi-supervised meta-learning
approach that transfers shape priors from labeled to unlabeled data to
reconstruct unseen object categories. The first stage uses an episodic training
scheme to simulate training on unlabeled data and meta-learns initial shape
priors. The second stage then introduces unlabeled data with disjoint classes
in a semi-supervised scheme to diversify these priors and achieve
generalization. We assess our method on both synthetic data and real collected
point clouds. Experimental results and analysis validate that our approach
outperforms existing neural SDF methods and is capable of robust zero-shot
inference on 100+ unseen classes. Code can be found at
https://github.com/princeton-computational-imaging/gensdf.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 17:58:29 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 02:09:09 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Chou",
"Gene",
""
],
[
"Chugunov",
"Ilya",
""
],
[
"Heide",
"Felix",
""
]
] |
new_dataset
| 0.995423 |
2206.05224
|
Wonseok Hwang
|
Wonseok Hwang, Dongjun Lee, Kyoungyeon Cho, Hanuhl Lee, Minjoon Seo
|
A Multi-Task Benchmark for Korean Legal Language Understanding and
Judgement Prediction
|
Accepted at NeurIPS 2022 Datasets and Benchmarks track
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The recent advances of deep learning have dramatically changed how machine
learning, especially in the domain of natural language processing, can be
applied to legal domain. However, this shift to the data-driven approaches
calls for larger and more diverse datasets, which are nevertheless still small
in number, especially in non-English languages. Here we present the first
large-scale benchmark of Korean legal AI datasets, LBOX OPEN, that consists of
one legal corpus, two classification tasks, two legal judgement prediction
(LJP) tasks, and one summarization task. The legal corpus consists of 147k
Korean precedents (259M tokens), of which 63k were sentenced in the last 4 years and
96k are from the first and the second level courts in which factual issues are
reviewed. The two classification tasks are case names (11.3k) and statutes
(2.8k) prediction from the factual description of individual cases. The LJP
tasks consist of (1) 10.5k criminal examples where the model is asked to
predict fine amount, imprisonment with labor, and imprisonment without labor
ranges for the given facts, and (2) 4.7k civil examples where the inputs are
facts and claim for relief and outputs are the degrees of claim acceptance. The
summarization task consists of the Supreme Court precedents and the
corresponding summaries (20k). We also release realistic variants of the
datasets by extending the domain (1) to infrequent case categories in case name
(31k examples) and statute (17.7k) classification tasks, and (2) to long input
sequences in the summarization task (51k). Finally, we release LCUBE, the first
Korean legal language model trained on the legal corpus from this study. Given
the uniqueness of the Law of South Korea and the diversity of the legal tasks
covered in this work, we believe that LBOX OPEN contributes to the
multilinguality of global legal research. LBOX OPEN and LCUBE will be publicly
available.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 16:51:45 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 11:08:34 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Hwang",
"Wonseok",
""
],
[
"Lee",
"Dongjun",
""
],
[
"Cho",
"Kyoungyeon",
""
],
[
"Lee",
"Hanuhl",
""
],
[
"Seo",
"Minjoon",
""
]
] |
new_dataset
| 0.999743 |
2206.06615
|
Yang Li
|
Yang Li, Ruhao Wan, Shixin Zhu
|
MDS Codes with Euclidean and Hermitian Hulls of Flexible Dimensions and
Their Applications to EAQECCs
|
25 pages, 5 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The hull of a linear code is the intersection of itself with its dual code
with respect to certain inner product. Both Euclidean and Hermitian hulls are
of theoretical and practical significance. In this paper, we construct several
new classes of MDS codes via (extended) generalized Reed-Solomon (GRS) codes
and determine their Euclidean or Hermitian hulls. Specifically, four new
classes of MDS codes with Hermitian hulls of flexible dimensions and six new
classes of MDS codes with Euclidean hulls of flexible dimensions are
constructed. For the former, we further construct four new classes of
entanglement-assisted quantum error-correcting codes (EAQECCs) and four new
classes of MDS EAQECCs of length $n>q+1$. For the latter, we also give some
examples on Euclidean self-orthogonal and one-dimensional Euclidean hull MDS
codes.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 06:22:41 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 14:33:05 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Wan",
"Ruhao",
""
],
[
"Zhu",
"Shixin",
""
]
] |
new_dataset
| 0.998832 |
2206.08916
|
Christopher Clark
|
Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi,
Aniruddha Kembhavi
|
Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Unified-IO, a model that performs a large variety of AI tasks
spanning classical computer vision tasks, including pose estimation, object
detection, depth estimation and image generation, vision-and-language tasks
such as region captioning and referring expression, to natural language
processing tasks such as question answering and paraphrasing. Developing a
single unified model for such a large variety of tasks poses unique challenges
due to the heterogeneous inputs and outputs pertaining to each task, including
RGB images, per-pixel maps, binary masks, bounding boxes, and language. We
achieve this unification by homogenizing every supported input and output into
a sequence of discrete vocabulary tokens. This common representation across all
tasks allows us to train a single transformer-based architecture, jointly on
over 90 diverse datasets in the vision and language fields. Unified-IO is the
first model capable of performing all 7 tasks on the GRIT benchmark and
produces strong results across 16 diverse benchmarks like NYUv2-Depth,
ImageNet, VQA2.0, OK-VQA, Swig, VizWizGround, BoolQ, and SciTail, with no
task-specific fine-tuning. Code and demos for Unified-IO are available at:
https://unified-io.allenai.org.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 17:53:47 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 22:37:32 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Lu",
"Jiasen",
""
],
[
"Clark",
"Christopher",
""
],
[
"Zellers",
"Rowan",
""
],
[
"Mottaghi",
"Roozbeh",
""
],
[
"Kembhavi",
"Aniruddha",
""
]
] |
new_dataset
| 0.991486 |
2206.09457
|
Mehmet Emre Ozfatura
|
Emre Ozfatura, Yulin Shao, Alberto Perotti, Branislav Popovic, Deniz
Gunduz
|
All you need is feedback: Communication with block attention feedback
codes
| null | null | null | null |
cs.IT cs.AI cs.LG eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning based channel code designs have recently gained interest as an
alternative to conventional coding algorithms, particularly for channels for
which existing codes do not provide effective solutions. Communication over a
feedback channel is one such problem, for which promising results have recently
been obtained by employing various deep learning architectures. In this paper,
we introduce a novel learning-aided code design for feedback channels, called
generalized block attention feedback (GBAF) codes, which i) employs a modular
architecture that can be implemented using different neural network
architectures; ii) provides order-of-magnitude improvements in the probability
of error compared to existing designs; and iii) can transmit at desired code
rates.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 17:55:04 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 16:13:17 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Ozfatura",
"Emre",
""
],
[
"Shao",
"Yulin",
""
],
[
"Perotti",
"Alberto",
""
],
[
"Popovic",
"Branislav",
""
],
[
"Gunduz",
"Deniz",
""
]
] |
new_dataset
| 0.997542 |
2207.08323
|
Jiahui Fu
|
Jiahui Fu, Chengyuan Lin, Yuichi Taguchi, Andrea Cohen, Yifu Zhang,
Stephen Mylabathula, and John J. Leonard
|
PlaneSDF-based Change Detection for Long-term Dense Mapping
|
8 pages, 7 figures, and 1 table. To be published in Robotics and
Automation Letters and IROS 2022. Link to supplementary video added in the
abstract: https://youtu.be/oh-MQPWTwZI
| null |
10.1109/LRA.2022.3191794
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to process environment maps across multiple sessions is critical
for robots operating over extended periods of time. Specifically, it is
desirable for autonomous agents to detect changes amongst maps of different
sessions so as to gain a conflict-free understanding of the current
environment. In this paper, we look into the problem of change detection based
on a novel map representation, dubbed Plane Signed Distance Fields (PlaneSDF),
where dense maps are represented as a collection of planes and their associated
geometric components in SDF volumes. Given point clouds of the source and
target scenes, we propose a three-step PlaneSDF-based change detection
approach: (1) PlaneSDF volumes are instantiated within each scene and
registered across scenes using plane poses; 2D height maps and object maps are
extracted per volume via height projection and connected component analysis.
(2) Height maps are compared and intersected with the object map to produce a
2D change location mask for changed object candidates in the source scene. (3)
3D geometric validation is performed using SDF-derived features per object
candidate for change mask refinement. We evaluate our approach on both
synthetic and real-world datasets and demonstrate its effectiveness via the
task of changed object detection. Supplementary video:
https://youtu.be/oh-MQPWTwZI
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 00:19:45 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 17:43:12 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Fu",
"Jiahui",
""
],
[
"Lin",
"Chengyuan",
""
],
[
"Taguchi",
"Yuichi",
""
],
[
"Cohen",
"Andrea",
""
],
[
"Zhang",
"Yifu",
""
],
[
"Mylabathula",
"Stephen",
""
],
[
"Leonard",
"John J.",
""
]
] |
new_dataset
| 0.991914 |
2208.01421
|
Mikhail Usvyatsov
|
Mikhail Usvyatsov, Rafael Ballester-Rippoll, Lina Bashaeva, Konrad
Schindler, Gonzalo Ferrer, Ivan Oseledets
|
T4DT: Tensorizing Time for Learning Temporal 3D Visual Data
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Unlike 2D raster images, there is no single dominant representation for 3D
visual data processing. Different formats like point clouds, meshes, or
implicit functions each have their strengths and weaknesses. Still, grid
representations such as signed distance functions have attractive properties
also in 3D. In particular, they offer constant-time random access and are
eminently suitable for modern machine learning. Unfortunately, the storage size
of a grid grows exponentially with its dimension. Hence they often exceed
memory limits even at moderate resolution. This work proposes using low-rank
tensor formats, including the Tucker, tensor train, and quantics tensor train
decompositions, to compress time-varying 3D data. Our method iteratively
computes, voxelizes, and compresses each frame's truncated signed distance
function and applies tensor rank truncation to condense all frames into a
single, compressed tensor that represents the entire 4D scene. We show that
low-rank tensor compression is extremely compact to store and query
time-varying signed distance functions. It significantly reduces the memory
footprint of 4D scenes while remarkably preserving their geometric quality.
Unlike existing, iterative learning-based approaches like DeepSDF and NeRF, our
method uses a closed-form algorithm with theoretical guarantees.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 12:57:08 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 16:33:52 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Usvyatsov",
"Mikhail",
""
],
[
"Ballester-Rippoll",
"Rafael",
""
],
[
"Bashaeva",
"Lina",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Ferrer",
"Gonzalo",
""
],
[
"Oseledets",
"Ivan",
""
]
] |
new_dataset
| 0.982953 |
2208.04358
|
Claudio Linhares D. G.
|
Claudio D. G. Linhares, Jean R. Ponciano, Diogenes S. Pedro, Luis E.
C. Rocha, Agma J. M. Traina, and Jorge Poco
|
LargeNetVis: Visual Exploration of Large Temporal Networks Based on
Community Taxonomies
|
11 pages, 9 figures
|
IEEE Transactions on Visualization and Computer Graphics, 2022
|
10.1109/TVCG.2022.3209477
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal (or time-evolving) networks are commonly used to model complex
systems and the evolution of their components throughout time. Although these
networks can be analyzed by different means, visual analytics stands out as an
effective way for a pre-analysis before doing quantitative/statistical analyses
to identify patterns, anomalies, and other behaviors in the data, thus leading
to new insights and better decision-making. However, the large number of nodes,
edges, and/or timestamps in many real-world networks may lead to polluted
layouts that make the analysis inefficient or even infeasible. In this paper,
we propose LargeNetVis, a web-based visual analytics system designed to assist
in analyzing small and large temporal networks. It successfully achieves this
goal by leveraging three taxonomies focused on network communities to guide the
visual exploration process. The system is composed of four interactive visual
components: the first (Taxonomy Matrix) presents a summary of the network
characteristics, the second (Global View) gives an overview of the network
evolution, the third (a node-link diagram) enables community- and node-level
structural analysis, and the fourth (a Temporal Activity Map -- TAM) shows the
community- and node-level activity under a temporal perspective.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 18:30:51 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Linhares",
"Claudio D. G.",
""
],
[
"Ponciano",
"Jean R.",
""
],
[
"Pedro",
"Diogenes S.",
""
],
[
"Rocha",
"Luis E. C.",
""
],
[
"Traina",
"Agma J. M.",
""
],
[
"Poco",
"Jorge",
""
]
] |
new_dataset
| 0.979116 |
2208.07220
|
Yue Liu
|
Yue Liu, Christos Matsoukas, Fredrik Strand, Hossein Azizpour, Kevin
Smith
|
PatchDropout: Economizing Vision Transformers Using Patch Dropout
|
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers have demonstrated the potential to outperform CNNs in a
variety of vision tasks. But the computational and memory requirements of these
models prohibit their use in many applications, especially those that depend on
high-resolution images, such as medical image classification. Efforts to train
ViTs more efficiently are overly complicated, necessitating architectural
changes or intricate training schemes. In this work, we show that standard ViT
models can be efficiently trained at high resolution by randomly dropping input
image patches. This simple approach, PatchDropout, reduces FLOPs and memory by
at least 50% in standard natural image datasets such as ImageNet, and those
savings only increase with image size. On CSAW, a high-resolution medical
dataset, we observe a 5 times savings in computation and memory using
PatchDropout, along with a boost in performance. For practitioners with a fixed
computational or memory budget, PatchDropout makes it possible to choose image
resolution, hyperparameters, or model size to get the most performance out of
their model.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 14:08:55 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 12:58:29 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Liu",
"Yue",
""
],
[
"Matsoukas",
"Christos",
""
],
[
"Strand",
"Fredrik",
""
],
[
"Azizpour",
"Hossein",
""
],
[
"Smith",
"Kevin",
""
]
] |
new_dataset
| 0.997429 |
2210.00146
|
Xiangcheng Hu
|
Jerred Chen, Xiangcheng Hu, Shicong Ma, Jianhao Jiao, Ming Liu, and
Frank Dellaert
|
FAST-LIO, Then Bayesian ICP, Then GTSFM
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
For the Hilti Challenge 2022, we created two systems, one building upon the
other. The first system is FL2BIPS which utilizes the iEKF algorithm FAST-LIO2
and Bayesian ICP PoseSLAM, whereas the second system is GTSFM, a structure from
motion pipeline with factor graph backend optimization powered by GTSAM.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 00:02:48 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 04:51:55 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Chen",
"Jerred",
""
],
[
"Hu",
"Xiangcheng",
""
],
[
"Ma",
"Shicong",
""
],
[
"Jiao",
"Jianhao",
""
],
[
"Liu",
"Ming",
""
],
[
"Dellaert",
"Frank",
""
]
] |
new_dataset
| 0.995134 |
2210.01839
|
Andrew Adamatzky
|
Anna Nikolaidou, Neil Phillips, Michail-Antisthenis Tsompanas, Andrew
Adamatzky
|
Reactive fungal insoles
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mycelium bound composites are promising materials for a diverse range of
applications including wearables and building elements. Their functionality
surpasses some of the capabilities of traditionally passive materials, such as
synthetic fibres, reconstituted cellulose fibres and natural fibres, thereby
creating novel propositions including augmented functionality (sensory) and
aesthetics (personal fashion). Biomaterials can offer multiple modal sensing
capability such as mechanical loading (compressive and tensile) and moisture
content. To assess the sensing potential of fungal insoles we undertook
laboratory experiments on electrical response of bespoke insoles made from
capillary matting colonised with oyster fungi Pleurotus ostreatus to
compressive stress which mimics human loading when standing and walking. We
have shown changes in electrical activity with compressive loading. The results
advance the development of intelligent sensing insoles which are a building
block towards more generic reactive fungal wearables. Using the FitzHugh-Nagumo
model, we numerically illustrated how excitation wave-fronts behave in a
mycelium network colonising an insole and showed that it may be possible to
discern pressure points from the mycelium electrical activity.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 18:15:28 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Nikolaidou",
"Anna",
""
],
[
"Phillips",
"Neil",
""
],
[
"Tsompanas",
"Michail-Antisthenis",
""
],
[
"Adamatzky",
"Andrew",
""
]
] |
new_dataset
| 0.995154 |
2210.01857
|
Mark Lowell
|
James Mason Inder, Mark Lowell, Andrew J. Maltenfort
|
Centerpoints Are All You Need in Overhead Imagery
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Labeling data to use for training object detectors is expensive and time
consuming. Publicly available overhead datasets for object detection are
labeled with image-aligned bounding boxes, object-aligned bounding boxes, or
object masks, but it is not clear whether such detailed labeling is necessary.
To test the idea, we developed novel single- and two-stage network
architectures that use centerpoints for labeling. In this paper we show that
these architectures achieve nearly equivalent performance to approaches using
more detailed labeling on three overhead object detection datasets.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 18:57:43 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Inder",
"James Mason",
""
],
[
"Lowell",
"Mark",
""
],
[
"Maltenfort",
"Andrew J.",
""
]
] |
new_dataset
| 0.9728 |
2210.01933
|
Peter Eckmann
|
Peter Eckmann, Anita Bandrowski
|
PreprintMatch: a tool for preprint publication detection applied to
analyze global inequities in scientific publishing
|
16 pages, 6 figures
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Preprints, versions of scientific manuscripts that precede peer review, are
growing in popularity. They offer an opportunity to democratize and accelerate
research, as they have no publication costs or a lengthy peer review process.
Preprints are often later published in peer-reviewed venues, but these
publications and the original preprints are frequently not linked in any way.
To this end, we developed a tool, PreprintMatch, to find matches between
preprints and their corresponding published papers, if they exist. This tool
outperforms existing techniques to match preprints and papers, both on matching
performance and speed. PreprintMatch was applied to search for matches between
preprints (from bioRxiv and medRxiv), and PubMed. The preliminary nature of
preprints offers a unique perspective into scientific projects at a relatively
early stage, and with better matching between preprint and paper, we explored
questions related to research inequity. We found that preprints from low income
countries are published as peer-reviewed papers at a lower rate than those from
high income countries (39.6% and 61.1%, respectively), and our data is consistent
with previous work that cites a lack of resources, lack of stability, and policy
choices to explain this discrepancy. Preprints from low income countries were
also found to be published quicker (178 vs 203 days) and with less title,
abstract, and author similarity to the published version compared to high
income countries. Low income countries add more authors from the preprint to
the published version than high income countries (0.42 authors vs 0.32,
respectively), a practice that is significantly more frequent in China compared
to similar countries. Finally, we find that some publishers publish work with
authors from lower income countries more frequently than others. PreprintMatch
is available at https://github.com/PeterEckmann1/preprint-match.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 22:09:45 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Eckmann",
"Peter",
""
],
[
"Bandrowski",
"Anita",
""
]
] |
new_dataset
| 0.99896 |
2210.02019
|
Matthew Aitchison
|
Matthew Aitchison, Penny Sweetser, Marcus Hutter
|
Atari-5: Distilling the Arcade Learning Environment down to Five Games
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Arcade Learning Environment (ALE) has become an essential benchmark for
assessing the performance of reinforcement learning algorithms. However, the
computational cost of generating results on the entire 57-game dataset limits
ALE's use and makes the reproducibility of many results infeasible. We propose
a novel solution to this problem in the form of a principled methodology for
selecting small but representative subsets of environments within a benchmark
suite. We applied our method to identify a subset of five ALE games, called
Atari-5, which produces 57-game median score estimates within 10% of their true
values. Extending the subset to 10 games recovers 80% of the variance in
log-scores for all games within the 57-game set. We show this level of
compression is possible due to a high degree of correlation between many of the
games in ALE.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 04:41:20 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Aitchison",
"Matthew",
""
],
[
"Sweetser",
"Penny",
""
],
[
"Hutter",
"Marcus",
""
]
] |
new_dataset
| 0.998794 |
2210.02030
|
Zheng Ding
|
Zheng Ding, James Hou, Zhuowen Tu
|
Point Cloud Recognition with Position-to-Structure Attention
Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present Position-to-Structure Attention Transformers
(PS-Former), a Transformer-based algorithm for 3D point cloud recognition.
PS-Former deals with the challenge in 3D point cloud representation where
points are not positioned in a fixed grid structure and have limited feature
description (only 3D coordinates ($x, y, z$) for scattered points). Existing
Transformer-based architectures in this domain often require a pre-specified
feature engineering step to extract point features. Here, we introduce two new
aspects in PS-Former: 1) a learnable condensation layer that performs point
downsampling and feature extraction; and 2) a Position-to-Structure Attention
mechanism that recursively enriches the structural information with the
position attention branch. Compared with the competing methods, while being
generic with fewer heuristic feature designs, PS-Former demonstrates
competitive experimental results on three 3D point cloud tasks including
classification, part segmentation, and scene segmentation.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 05:40:33 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Ding",
"Zheng",
""
],
[
"Hou",
"James",
""
],
[
"Tu",
"Zhuowen",
""
]
] |
new_dataset
| 0.998282 |
2210.02038
|
Hanwei Zhang
|
Hanwei Zhang, Hideaki Uchiyama, Shintaro Ono and Hiroshi Kawasaki
|
MOTSLAM: MOT-assisted monocular dynamic SLAM using single-view depth
estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual SLAM systems targeting static scenes have been developed with
satisfactory accuracy and robustness. Dynamic 3D object tracking has then
become a significant capability in visual SLAM with the requirement of
understanding dynamic surroundings in various scenarios including autonomous
driving, augmented and virtual reality. However, performing dynamic SLAM solely
with monocular images remains a challenging problem due to the difficulty of
associating dynamic features and estimating their positions. In this paper, we
present MOTSLAM, a dynamic visual SLAM system with the monocular configuration
that tracks both poses and bounding boxes of dynamic objects. MOTSLAM first
performs multiple object tracking (MOT) with associated 2D and 3D bounding
box detection to create initial 3D objects. Then, neural-network-based
monocular depth estimation is applied to fetch the depth of dynamic features.
Finally, camera poses, object poses, and both static and dynamic map points
are jointly optimized using a novel bundle adjustment. Our experiments
on the KITTI dataset demonstrate that our system achieves the best performance
on both camera ego-motion and object tracking in monocular dynamic SLAM.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 06:07:10 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Zhang",
"Hanwei",
""
],
[
"Uchiyama",
"Hideaki",
""
],
[
"Ono",
"Shintaro",
""
],
[
"Kawasaki",
"Hiroshi",
""
]
] |
new_dataset
| 0.995079 |
2210.02085
|
Thijs Otter
|
Thijs Otter
|
DooML: A new Database & Object-Oriented Modeling Language for
database-driven web application design and development
|
9 pages. International Journal of Software Engineering & Applications
(IJSEA), Chennai, India (2022)
| null |
10.5121/ijsea.2022.13503
| null |
cs.SE cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
A database-driven web application is a very common software solution to
rising business problems. Modeling the database and the software architecture
can be challenging, since there is no single combined modeling language for
database and software architecture specifically suited to web application
development. In this paper we present the Database object-oriented Modeling
Language (DooML) and its primary Archetype Diagram: a notation for specifying
the design of a database schema and the corresponding object-oriented software
architecture. It combines the syntax for drawing Entity Relationship Diagrams,
the Relational Model, and Unified Modeling Language Class Diagrams to
create a mixed diagram that states both database design and software design
specifications. By default, DooML ensures that the approach to advanced web
application development is model-driven and both database-oriented and
object-oriented.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 08:24:32 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Otter",
"Thijs",
""
]
] |
new_dataset
| 0.998544 |
2210.02095
|
Andrea Valenti
|
Andrea Valenti, Davide Bacciu, Antonio Vergari
|
ChemAlgebra: Algebraic Reasoning on Chemical Reactions
| null | null | null | null |
cs.LG physics.chem-ph q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
While showing impressive performance on various kinds of learning tasks, it
is yet unclear whether deep learning models have the ability to robustly tackle
reasoning tasks, or whether they merely exploit shortcuts in the data rather
than learning the underlying reasoning process that is actually required to
solve the tasks. Measuring the robustness of reasoning in machine learning
models is challenging, as one needs to provide a task that cannot be easily
shortcut by exploiting spurious statistical correlations in the data, while
operating on complex objects and constraints.
To address this issue, we propose ChemAlgebra, a benchmark for measuring the
reasoning capabilities of deep learning models through the prediction of
stoichiometrically-balanced chemical reactions. ChemAlgebra requires
manipulating sets of complex discrete objects -- molecules represented as
formulas or graphs -- under algebraic constraints such as the mass preservation
principle. We believe that ChemAlgebra can serve as a useful test bed for the
next generation of machine reasoning models and as a promoter of their
development.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 08:34:44 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Valenti",
"Andrea",
""
],
[
"Bacciu",
"Davide",
""
],
[
"Vergari",
"Antonio",
""
]
] |
new_dataset
| 0.999467 |
2210.02109
|
Wilson Jallet
|
Wilson Jallet (WILLOW, LAAS-GEPETTO), Antoine Bambade (ENPC, WILLOW),
Nicolas Mansard (LAAS-GEPETTO), Justin Carpentier (WILLOW)
|
ProxNLP: a primal-dual augmented Lagrangian solver for nonlinear
programming in Robotics and beyond
|
Workshop paper at the 6th Legged Robots Workshop, at the IEEE
International Conference on Robotics and Automation (ICRA) 2022
|
6th Legged Robots Workshop, May 2022, Philadelphia, Pennsylvania,
United States
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mathematical optimization is the workhorse behind several aspects of modern
robotics and control. In these applications, the focus is on constrained
optimization, and the ability to work on manifolds (such as the classical
matrix Lie groups), along with a specific requirement for robustness and speed.
In recent years, augmented Lagrangian methods have seen a resurgence due to
their robustness and flexibility, their connections to (inexact) proximal-point
methods, and their interoperability with Newton or semismooth Newton methods.
In the sequel, we present a primal-dual augmented Lagrangian method for
inequality-constrained problems on manifolds, which we introduced in our recent
work, as well as an efficient C++ implementation suitable for use in robotics
applications and beyond.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 09:18:51 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Jallet",
"Wilson",
"",
"WILLOW, LAAS-GEPETTO"
],
[
"Bambade",
"Antoine",
"",
"ENPC, WILLOW"
],
[
"Mansard",
"Nicolas",
"",
"LAAS-GEPETTO"
],
[
"Carpentier",
"Justin",
"",
"WILLOW"
]
] |
new_dataset
| 0.989764 |
2210.02165
|
Pierpaolo Vivo
|
Evan Tzanis, Pierpaolo Vivo, Yanik-Pascal F\"orster, Luca Gamberi,
Alessia Annibale
|
Graphie: A network-based visual interface for UK's Primary Legislation
|
15 pages, 12 figures
| null | null | null |
cs.CY physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Graphie, a novel navigational interface to visualize Acts and
Bills included in the UK's legislation digital repository [legislation.gov.uk].
Graphie provides a network representation of the hierarchical structure of an
Act of Parliament, which is typically organized in a tree-like fashion
according to the content and information contained in each sub-branch. Nodes in
Graphie represent sections of an Act (or individual provisions), while links
embody the hierarchical connections between them. The legal map provided by
Graphie is easily navigable by hovering on nodes, which are also color-coded
and numbered to provide easily accessible information about the underlying
content. The full textual content of each node is also available on a dedicated
hyperlinked canvas. The building block of Graphie is Sofia, an offline data
pipeline designed to support different data visualizations by parsing and
modelling data provided by [legislation.gov.uk] in open access form. While we
focus on the Housing Act 2004 for illustrative purposes, our platform is
scalable, versatile, and provides users with a unified toolbox to visualize and
explore the UK legal corpus in a fast and user-friendly way.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 11:47:20 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Tzanis",
"Evan",
""
],
[
"Vivo",
"Pierpaolo",
""
],
[
"Förster",
"Yanik-Pascal",
""
],
[
"Gamberi",
"Luca",
""
],
[
"Annibale",
"Alessia",
""
]
] |
new_dataset
| 0.996288 |
2210.02199
|
Peiwang Tang
|
Peiwang Tang and Xianchao Zhang
|
MTSMAE: Masked Autoencoders for Multivariate Time-Series Forecasting
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale self-supervised pre-trained Transformer architectures have
significantly boosted the performance of various tasks in natural language
processing (NLP) and computer vision (CV). However, there is little research
on processing multivariate time-series with pre-trained Transformers,
and in particular, the study of masking time-series for self-supervised
learning remains a gap. Different from language and image processing, the
information density of time-series increases the difficulty of research. The
challenge goes further with the inapplicability of previous patch embedding and
mask methods. In this paper, according to the data characteristics of
multivariate time-series, a patch embedding method is proposed, and we present
a self-supervised pre-training approach based on Masked Autoencoders (MAE),
called MTSMAE, which can improve the performance significantly over supervised
learning without pre-training. Evaluating our method on several common
multivariate time-series datasets from different fields and with different
characteristics, experiment results demonstrate that the performance of our
method is significantly better than the best method currently available.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 03:06:21 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Tang",
"Peiwang",
""
],
[
"Zhang",
"Xianchao",
""
]
] |
new_dataset
| 0.986512 |
2210.02231
|
Yue Zhu
|
Yue Zhu, David Picard
|
Decanus to Legatus: Synthetic training for 2D-3D human pose lifting
|
Accepted by ACCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human pose estimation is a challenging task because of the difficulty to
acquire ground-truth data outside of controlled environments. A number of
further issues have been hindering progress in building a universal and robust
model for this task, including domain gaps between different datasets, unseen
actions between train and test datasets, various hardware settings and high
cost of annotation, etc. In this paper, we propose an algorithm to generate
infinite 3D synthetic human poses (Legatus) from a 3D pose distribution based
on 10 initial handcrafted 3D poses (Decanus) during the training of a 2D to 3D
human pose lifter neural network. Our results show that we can achieve 3D pose
estimation performance comparable to methods using real data from specialized
datasets but in a zero-shot setup, showing the generalization potential of our
framework.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 13:10:19 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Zhu",
"Yue",
""
],
[
"Picard",
"David",
""
]
] |
new_dataset
| 0.999708 |
2210.02287
|
Luyuan Xie
|
Luyuan Xie, Yan Zhong, Lin Yang, Zhaoyu Yan, Zhonghai Wu, Junjie Wang
|
TC-SKNet with GridMask for Low-complexity Classification of Acoustic
scene
|
Accepted to APSIPA ASC 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks (CNNs) perform well in low-complexity
classification tasks such as acoustic scene classification (ASC). However,
there are few studies on the relationship between the length of the target
speech and the size of the convolution kernels. In this paper, we combine a
Selective Kernel Network with Temporal-Convolution (TC-SKNet) to adjust the
receptive field of convolution kernels, solving the problem of variable-length
target voice while keeping complexity low. GridMask is a data augmentation
strategy that masks part of the raw data or feature area. It can enhance the
generalization of the model, playing a role similar to dropout. In our
experiments, the performance gain brought by GridMask is stronger than that of
spectrum augmentation in ASC. Finally, we adopt AutoML to search for the best
structure of TC-SKNet and the hyperparameters of GridMask to improve the
classification performance. As a result, TC-SKNet reaches a peak accuracy of
59.87%, equivalent to the SOTA, while using only 20.9K parameters.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 14:24:17 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Xie",
"Luyuan",
""
],
[
"Zhong",
"Yan",
""
],
[
"Yang",
"Lin",
""
],
[
"Yan",
"Zhaoyu",
""
],
[
"Wu",
"Zhonghai",
""
],
[
"Wang",
"Junjie",
""
]
] |
new_dataset
| 0.965483 |
2210.02374
|
Vinod Grover
|
Alexander Collins, Vinod Grover
|
Axon: A Language for Dynamic Shapes in Deep Learning Graphs
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Axon is a language that enables shape and rank inference for tensors in
Deep Learning graphs. It aims to make shapes implicit and inferred, in a
similar manner to how types are implicit and inferred in many functional
programming languages. Tensor dimensions are represented by expressions
consisting of symbolic variables, constants, and arithmetic operators. Tensor
shapes can be expressed as either a sequence of these dimension expressions, as
a symbolic variable, or as an appending of other shapes. This allows complex
constraints on shapes to be expressed. Axon is functional in style, with a type
system similar to that of Standard ML, extended to include shape information.
It provides a suite of built-in operators over tensors, including pointwise
arithmetic operators, maps, reductions, loops, and user-defined functions. We
describe a shape inference algorithm based on constraint solving which infers
information about shapes, from both shape information provided by the
programmer and the structure of the program. This allows fully automatic
inference of the shapes of tensors for complex Deep Learning graphs. This
approach reduces programmer effort when specifying graphs, as tensor shapes are
not explicit, allows composition of Deep Learning graphs while maintaining
input and output tensor shape compatibility, and aids in automated error
detection by identifying shape mismatches at runtime.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 16:34:15 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Collins",
"Alexander",
""
],
[
"Grover",
"Vinod",
""
]
] |
new_dataset
| 0.9997 |
2210.02399
|
Ruben Villegas
|
Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan
Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze,
Dumitru Erhan
|
Phenaki: Variable Length Video Generation From Open Domain Textual
Description
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Phenaki, a model capable of realistic video synthesis, given a
sequence of textual prompts. Generating videos from text is particularly
challenging due to the computational cost, limited quantities of high quality
text-video data and variable length of videos. To address these issues, we
introduce a new model for learning video representation which compresses the
video to a small representation of discrete tokens. This tokenizer uses causal
attention in time, which allows it to work with variable-length videos. To
generate video tokens from text we use a bidirectional masked transformer
conditioned on pre-computed text tokens. The generated video tokens are
subsequently de-tokenized to create the actual video. To address data issues,
we demonstrate how joint training on a large corpus of image-text pairs as well
as a smaller number of video-text examples can result in generalization beyond
what is available in the video datasets. Compared to the previous video
generation methods, Phenaki can generate arbitrarily long videos conditioned on
a sequence of prompts (i.e., time-variable text or a story) in the open domain.
To the best of our knowledge, this is the first time a paper studies generating
videos from time-variable prompts. In addition, compared to the per-frame baselines,
the proposed video encoder-decoder computes fewer tokens per video but results
in better spatio-temporal consistency.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 17:18:28 GMT"
}
] | 2022-10-06T00:00:00 |
[
[
"Villegas",
"Ruben",
""
],
[
"Babaeizadeh",
"Mohammad",
""
],
[
"Kindermans",
"Pieter-Jan",
""
],
[
"Moraldo",
"Hernan",
""
],
[
"Zhang",
"Han",
""
],
[
"Saffar",
"Mohammad Taghi",
""
],
[
"Castro",
"Santiago",
""
],
[
"Kunze",
"Julius",
""
],
[
"Erhan",
"Dumitru",
""
]
] |
new_dataset
| 0.987123 |
2007.04074
|
Matthias Feurer
|
Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius
Lindauer and Frank Hutter
|
Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning
|
Final version as published at JMLR 23(261)
|
Journal of Machine Learning Research 23(261), 2022
| null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Automated Machine Learning (AutoML) supports practitioners and researchers
with the tedious task of designing machine learning pipelines and has recently
achieved substantial success. In this paper, we introduce new AutoML approaches
motivated by our winning submission to the second ChaLearn AutoML challenge. We
develop PoSH Auto-sklearn, which enables AutoML systems to work well on large
datasets under rigid time limits by using a new, simple and meta-feature-free
meta-learning technique and by employing a successful bandit strategy for
budget allocation. However, PoSH Auto-sklearn introduces even more ways of
running AutoML and might make it harder for users to set it up correctly.
Therefore, we also go one step further and study the design space of AutoML
itself, proposing a solution towards truly hands-free AutoML. Together, these
changes give rise to the next generation of our AutoML system, Auto-sklearn
2.0. We verify the improvements by these additions in an extensive experimental
study on 39 AutoML benchmark datasets. We conclude the paper by comparing to
other popular AutoML frameworks and Auto-sklearn 1.0, reducing the relative
error by up to a factor of 4.5, and yielding a performance in 10 minutes that
is substantially better than what Auto-sklearn 1.0 achieves within an hour.
|
[
{
"version": "v1",
"created": "Wed, 8 Jul 2020 12:41:03 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Sep 2021 08:09:35 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Oct 2022 12:18:34 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Feurer",
"Matthias",
""
],
[
"Eggensperger",
"Katharina",
""
],
[
"Falkner",
"Stefan",
""
],
[
"Lindauer",
"Marius",
""
],
[
"Hutter",
"Frank",
""
]
] |
new_dataset
| 0.996592 |
2101.00086
|
Emanuele Guidotti
|
Emanuele Guidotti
|
calculus: High Dimensional Numerical and Symbolic Calculus in R
| null |
Journal of Statistical Software (2022), 104(5), 1-37
|
10.18637/jss.v104.i05
| null |
cs.MS cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
The R package calculus implements C++ optimized functions for numerical and
symbolic calculus, such as the Einstein summing convention, fast computation of
the Levi-Civita symbol and generalized Kronecker delta, Taylor series
expansion, multivariate Hermite polynomials, high-order derivatives, ordinary
differential equations, differential operators and numerical integration in
arbitrary orthogonal coordinate systems. The library applies numerical methods
when working with R functions or symbolic programming when working with
characters or expressions. The package handles multivariate numerical calculus
in arbitrary dimensions and coordinates and implements the symbolic counterpart
of the numerical methods whenever possible, without depending on external
computer algebra systems. Except for Rcpp, the package has no strict
dependencies in order to provide a stable self-contained toolbox that invites
re-use.
|
[
{
"version": "v1",
"created": "Thu, 31 Dec 2020 21:52:19 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Guidotti",
"Emanuele",
""
]
] |
new_dataset
| 0.993576 |
2102.07981
|
Mingbao Lin
|
Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Fei Chao, Chia-Wen
Lin, Ling Shao
|
SiMaN: Sign-to-Magnitude Network Binarization
|
Accepted by IEEE TPAMI, 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Binary neural networks (BNNs) have attracted broad research interest due to
their efficient storage and computational ability. Nevertheless, a significant
challenge of BNNs lies in handling discrete constraints while ensuring bit
entropy maximization, which typically makes their weight optimization very
difficult. Existing methods relax the learning using the sign function, which
simply encodes positive weights into +1s, and -1s otherwise. Alternatively, we
formulate an angle alignment objective to constrain the weight binarization to
{0,+1} to solve the challenge. In this paper, we show that our weight
binarization provides an analytical solution by encoding high-magnitude weights
into +1s, and 0s otherwise. Therefore, a high-quality discrete solution is
established in a computationally efficient manner without the sign function. We
prove that the learned weights of binarized networks roughly follow a Laplacian
distribution that does not allow entropy maximization, and further demonstrate
that it can be effectively solved by simply removing the $\ell_2$
regularization during network training. Our method, dubbed sign-to-magnitude
network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet,
demonstrating its superiority over the sign-based state-of-the-arts. Our source
code, experimental settings, training logs and binary models are available at
https://github.com/lmbxmu/SiMaN.
|
[
{
"version": "v1",
"created": "Tue, 16 Feb 2021 07:03:51 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Mar 2021 12:51:21 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Oct 2022 17:24:54 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Lin",
"Mingbao",
""
],
[
"Ji",
"Rongrong",
""
],
[
"Xu",
"Zihan",
""
],
[
"Zhang",
"Baochang",
""
],
[
"Chao",
"Fei",
""
],
[
"Lin",
"Chia-Wen",
""
],
[
"Shao",
"Ling",
""
]
] |
new_dataset
| 0.995039 |
2109.08682
|
Ana Enriquez
|
Ana Enriquez
|
Section 108 and Software Collections, A User's Guide
|
Software Preservation Network. Zenodo (2022)
| null |
10.5281/zenodo.6949233
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This user's guide explains Section 108 of the U.S. Copyright Act, a set of
rights for libraries and archives, in the context of software collections. It
also addresses the interaction between Section 108 and fair use (Section 107)
in this context. The guide will help library and archives workers who preserve
and provide access to software collections to navigate U.S. copyright law.
|
[
{
"version": "v1",
"created": "Fri, 17 Sep 2021 14:05:50 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Enriquez",
"Ana",
""
]
] |
new_dataset
| 0.999051 |
2110.04075
|
Abdelrahman Abdallah
|
Nazgul Toiganbayeva, Mahmoud Kasem, Galymzhan Abdimanap, Kairat
Bostanbekov, Abdelrahman Abdallah, Anel Alimova, Daniyar Nurseitov
|
KOHTD: Kazakh Offline Handwritten Text Dataset
| null |
Signal Processing: Image Communication, Volume 108, October 2022
|
10.1016/j.image.2022.116827
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the transition to digital information exchange, many documents, such
as invoices, taxes, memos and questionnaires, historical data, and answers to
exam questions, still require handwritten inputs. In this regard, there is a
need to implement Handwritten Text Recognition (HTR) which is an automatic way
to decrypt records using a computer. Handwriting recognition is challenging
because of the virtually infinite number of ways a person can write the same
message. To advance Kazakh handwritten text recognition research, a
comprehensive dataset of Kazakh handwritten texts is necessary.
This is particularly true given the lack of a dataset for handwritten Kazakh
text. In this paper, we propose our extensive Kazakh Offline Handwritten Text
Dataset (KOHTD), which contains 3000 handwritten exam papers, more than 140,335
segmented images, and approximately 922,010 symbols. It can serve
researchers working on handwriting recognition tasks with deep learning and
machine learning. We used a variety of popular text recognition methods for
word and line recognition in our studies, including CTC-based and
attention-based methods. The findings demonstrate KOHTD's diversity. Also, we
proposed a Genetic Algorithm (GA) for line and word segmentation based on
random enumeration of a parameter. The dataset and GA code are available at
https://github.com/abdoelsayed2016/KOHTD.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 16:19:38 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Toiganbayeva",
"Nazgul",
""
],
[
"Kasem",
"Mahmoud",
""
],
[
"Abdimanap",
"Galymzhan",
""
],
[
"Bostanbekov",
"Kairat",
""
],
[
"Abdallah",
"Abdelrahman",
""
],
[
"Alimova",
"Anel",
""
],
[
"Nurseitov",
"Daniyar",
""
]
] |
new_dataset
| 0.99986 |
2110.04494
|
Baoquan Zhang
|
Baoquan Zhang, Shanshan Feng, Xutao Li, Yunming Ye, and Rui Ye
|
SGMNet: Scene Graph Matching Network for Few-Shot Remote Sensing Scene
Classification
|
13 pages
| null |
10.1109/TGRS.2022.3200056
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Few-Shot Remote Sensing Scene Classification (FSRSSC) is an important task,
which aims to recognize novel scene classes with few examples. Recently,
several studies attempt to address the FSRSSC problem by following few-shot
natural image classification methods. These existing methods have made
promising progress and achieved superior performance. However, they all
overlook two unique characteristics of remote sensing images: (i) object
co-occurrence, i.e., multiple objects tend to appear together in a scene image,
and (ii) object spatial correlation, i.e., these co-occurring objects are
distributed in the scene image following certain spatial structure patterns. Such
unique characteristics are very beneficial for FSRSSC, which can effectively
alleviate the scarcity issue of labeled remote sensing images since they can
provide more refined descriptions for each scene class. To fully exploit these
characteristics, we propose a novel scene graph matching-based meta-learning
framework for FSRSSC, called SGMNet. In this framework, a scene graph
construction module is carefully designed to represent each test remote sensing
image or each scene class as a scene graph, where the nodes reflect these
co-occurring objects while the edges capture the spatial correlations
between them. Then, a scene graph matching module is
further developed to evaluate the similarity score between each test remote
sensing image and each scene class. Finally, based on the similarity scores, we
perform the scene class prediction via a nearest neighbor classifier. We
conduct extensive experiments on UCMerced LandUse, WHU19, AID, and
NWPU-RESISC45 datasets. The experimental results show that our method obtains
superior performance over the previous state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 9 Oct 2021 07:43:40 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Zhang",
"Baoquan",
""
],
[
"Feng",
"Shanshan",
""
],
[
"Li",
"Xutao",
""
],
[
"Ye",
"Yunming",
""
],
[
"Ye",
"Rui",
""
]
] |
new_dataset
| 0.999307 |
2110.06253
|
Roberto Natella
|
Roberto Natella
|
StateAFL: Greybox Fuzzing for Stateful Network Servers
|
The tool is available at https://github.com/stateafl/stateafl
|
Empir Software Eng 27, 191 (2022)
|
10.1007/s10664-022-10233-3
| null |
cs.CR cs.OS cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fuzzing network servers is a technical challenge, since the behavior of the
target server depends on its state over a sequence of multiple messages.
Existing solutions are costly and difficult to use, as they rely on
manually-customized artifacts such as protocol models, protocol parsers, and
learning frameworks. The aim of this work is to develop a greybox fuzzer
(StateAFL) for network servers that only relies on lightweight analysis of the
target program, with no manual customization, in a similar way to what the AFL
fuzzer achieved for stateless programs. The proposed fuzzer instruments the
target server at compile-time, to insert probes on memory allocations and
network I/O operations. At run-time, it infers the current protocol state of
the target server by taking snapshots of long-lived memory areas, and by
applying a fuzzy hashing algorithm (Locality-Sensitive Hashing) to map memory
contents to a unique state identifier. The fuzzer incrementally builds a
protocol state machine for guiding fuzzing.
We implemented and released StateAFL as open-source software. As a basis for
reproducible experimentation, we integrated StateAFL with a large set of
network servers for popular protocols, with no manual customization to
accommodate the protocol. The experimental results show that the fuzzer can
be applied with no manual customization on a large set of network servers for
popular protocols, and that it can achieve comparable, or even better code
coverage and bug detection than customized fuzzing. Moreover, our qualitative
analysis shows that states inferred from memory better reflect the server
behavior than only using response codes from messages.
|
[
{
"version": "v1",
"created": "Tue, 12 Oct 2021 18:08:38 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 12:18:24 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Natella",
"Roberto",
""
]
] |
new_dataset
| 0.971006 |
2111.02735
|
Yingzhi Wang
|
Yingzhi Wang, Abdelmoumene Boumadane and Abdelwahab Heba
|
A Fine-tuned Wav2vec 2.0/HuBERT Benchmark For Speech Emotion
Recognition, Speaker Verification and Spoken Language Understanding
|
7 pages, 2 figures
| null | null | null |
cs.CL cs.NE cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech self-supervised models such as wav2vec 2.0 and HuBERT are making
revolutionary progress in Automatic Speech Recognition (ASR). However, they
have not been totally proven to produce better performance on tasks other than
ASR. In this work, we explored partial fine-tuning and entire fine-tuning on
wav2vec 2.0 and HuBERT pre-trained models for three non-ASR speech tasks:
Speech Emotion Recognition, Speaker Verification and Spoken Language
Understanding. With simple proposed downstream frameworks, the best scores
reached 79.58% weighted accuracy on speaker-dependent setting and 73.01%
weighted accuracy on speaker-independent setting for Speech Emotion Recognition
on IEMOCAP, 2.36% equal error rate for Speaker Verification on VoxCeleb1,
89.38% accuracy for Intent Classification and 78.92% F1 for Slot Filling on
SLURP, showing the strength of fine-tuned wav2vec 2.0 and HuBERT on learning
prosodic, voice-print and semantic representations.
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 10:39:06 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2022 12:59:44 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Oct 2022 20:50:54 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Wang",
"Yingzhi",
""
],
[
"Boumadane",
"Abdelmoumene",
""
],
[
"Heba",
"Abdelwahab",
""
]
] |
new_dataset
| 0.99958 |
2111.11707
|
Nankai Lin
|
Ru Peng and Nankai Lin and Yi Fang and Shengyi Jiang and Tianyong Hao
and Boyu Chen and Junbo Zhao
|
Deps-SAN: Neural Machine Translation with Dependency-Scaled
Self-Attention Network
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Syntactic knowledge contributes considerable strength to Neural Machine
Translation (NMT) tasks. Early NMT works assumed that syntactic details can be
learned automatically from large amounts of text via attention networks.
However, subsequent research pointed out that, limited by the uncontrolled
nature of attention computation, the NMT model requires external syntax to
capture deep syntactic awareness. Although existing syntax-aware NMT methods
have borne great fruit in incorporating syntax, the additional workloads they
introduce render the model heavy and slow. In particular, these efforts
scarcely involve Transformer-based NMT or modify its core self-attention network (SAN). To
this end, we propose a parameter-free, Dependency-scaled Self-Attention Network
(Deps-SAN) for syntax-aware Transformer-based NMT. A quantified matrix of
dependency closeness between tokens is constructed to impose explicit syntactic
constraints into the SAN for learning syntactic details and dispelling the
dispersion of attention distributions. Two knowledge sparsing techniques are
further integrated to avoid the model overfitting the dependency noises
introduced by the external parser. Experiments and analyses on IWSLT14
German-to-English and WMT16 German-to-English benchmark NMT tasks verify the
effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 08:01:21 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Nov 2021 03:38:31 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2022 07:48:12 GMT"
},
{
"version": "v4",
"created": "Thu, 30 Jun 2022 06:52:54 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Oct 2022 07:29:31 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Peng",
"Ru",
""
],
[
"Lin",
"Nankai",
""
],
[
"Fang",
"Yi",
""
],
[
"Jiang",
"Shengyi",
""
],
[
"Hao",
"Tianyong",
""
],
[
"Chen",
"Boyu",
""
],
[
"Zhao",
"Junbo",
""
]
] |
new_dataset
| 0.99043 |
2112.02447
|
Fantine Huot
|
Fantine Huot, R. Lily Hu, Nita Goyal, Tharun Sankar, Matthias Ihme,
Yi-Fan Chen
|
Next Day Wildfire Spread: A Machine Learning Data Set to Predict
Wildfire Spreading from Remote-Sensing Data
|
submitted to IEEE Transactions on Geoscience and Remote Sensing
| null |
10.1109/TGRS.2022.3192974
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Predicting wildfire spread is critical for land management and disaster
preparedness. To this end, we present `Next Day Wildfire Spread,' a curated,
large-scale, multivariate data set of historical wildfires aggregating nearly a
decade of remote-sensing data across the United States. In contrast to existing
fire data sets based on Earth observation satellites, our data set combines 2D
fire data with multiple explanatory variables (e.g., topography, vegetation,
weather, drought index, population density) aligned over 2D regions, providing
a feature-rich data set for machine learning. To demonstrate the usefulness of
this data set, we implement a neural network that takes advantage of the
spatial information of this data to predict wildfire spread. We compare the
performance of the neural network with other machine learning models: logistic
regression and random forest. This data set can be used as a benchmark for
developing wildfire propagation models based on remote sensing data for a lead
time of one day.
|
[
{
"version": "v1",
"created": "Sat, 4 Dec 2021 23:28:44 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 20:59:51 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Huot",
"Fantine",
""
],
[
"Hu",
"R. Lily",
""
],
[
"Goyal",
"Nita",
""
],
[
"Sankar",
"Tharun",
""
],
[
"Ihme",
"Matthias",
""
],
[
"Chen",
"Yi-Fan",
""
]
] |
new_dataset
| 0.999198 |
2201.06750
|
Ying Wang
|
Ying Wang, Yuexing Peng, Xinran Liu, Wei Li, George C.
Alexandropoulos, Junchuan Yu, Daqing Ge, Wei Xiang
|
DDU-Net: Dual-Decoder-U-Net for Road Extraction Using High-Resolution
Remote Sensing Images
| null | null |
10.1109/TGRS.2022.3197546
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extracting roads from high-resolution remote sensing images (HRSIs) is vital
in a wide variety of applications, such as autonomous driving, path planning,
and road navigation. Due to the long and thin shape as well as the shades
induced by vegetation and buildings, small-sized roads are more difficult to
discern. In order to improve the reliability and accuracy of small-sized road
extraction when roads of multiple sizes coexist in an HRSI, an enhanced deep
neural network model termed Dual-Decoder-U-Net (DDU-Net) is proposed in this
paper. Motivated by the U-Net model, a small decoder is added to form a
dual-decoder structure for more detailed features. In addition, we introduce
the dilated convolution attention module (DCAM) between the encoder and
decoders to increase the receptive field as well as to distill multi-scale
features through cascading dilated convolution and global average pooling. The
convolutional block attention module (CBAM) is also embedded in the parallel
dilated convolution and pooling branches to capture more attention-aware
features. Extensive experiments are conducted on the Massachusetts Roads
dataset with experimental results showing that the proposed model outperforms
the state-of-the-art DenseUNet, DeepLabv3+ and D-LinkNet by 6.5%, 3.3%, and
2.1% in the mean Intersection over Union (mIoU), and by 4%, 4.8%, and 3.1% in
the F1 score, respectively. Both ablation and heatmap analyses are presented to
validate the effectiveness of the proposed model.
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 05:27:49 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Wang",
"Ying",
""
],
[
"Peng",
"Yuexing",
""
],
[
"Liu",
"Xinran",
""
],
[
"Li",
"Wei",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Yu",
"Junchuan",
""
],
[
"Ge",
"Daqing",
""
],
[
"Xiang",
"Wei",
""
]
] |
new_dataset
| 0.996435 |
2203.00337
|
Wei Cheah
|
Wei Cheah, Keir Groves, Horatio Martin, Harriet Peel, Simon Watson,
Ognjen Marjanovic and Barry Lennox
|
MIRRAX: A Reconfigurable Robot for Limited Access Environments
|
12 pages, Accepted for IEEE Transactions on Robotics
| null |
10.1109/TRO.2022.3207095
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of mobile robot platforms for inspection has gained traction
in recent years with the rapid advancement in hardware and software. However,
conventional mobile robots are unable to address the challenge of operating in
extreme environments where the robot is required to traverse narrow gaps in
highly cluttered areas with restricted access. This paper presents MIRRAX, a
robot that has been designed to meet these challenges with the capability of
re-configuring itself to both access restricted environments through narrow
ports and navigate through tightly spaced obstacles. Controllers for the robot
are detailed, along with an analysis on the controllability of the robot given
the use of Mecanum wheels in a variable configuration. Characterisation of the
robot's performance identified suitable configurations for operating in narrow
environments. The minimum lateral footprint width achievable for stable
configuration ($<2^\text{o}$~roll) was 0.19~m. Experimental validation of the
robot's controllability shows good agreement with the theoretical analysis. A
further series of experiments shows the feasibility of the robot in addressing
the challenges above: the capability to reconfigure itself for restricted entry
through ports as small as 150mm diameter, and navigating through cluttered
environments. The paper also presents results from a deployment in a Magnox
facility at the Sellafield nuclear site in the UK - the first robot to ever do
so, for remote inspection and mapping.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 10:23:33 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 20:48:17 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Cheah",
"Wei",
""
],
[
"Groves",
"Keir",
""
],
[
"Martin",
"Horatio",
""
],
[
"Peel",
"Harriet",
""
],
[
"Watson",
"Simon",
""
],
[
"Marjanovic",
"Ognjen",
""
],
[
"Lennox",
"Barry",
""
]
] |
new_dataset
| 0.9998 |
2203.14733
|
Juan M. Gandarias
|
Luca Fortini (1)(2), Mattia Leonori (1), Juan M. Gandarias (1), Elena
De Momi (2), Arash Ajoudani (1) ((1) Human-Robot Interfaces and Physical
Interaction, Istituto Italiano di Tecnologia, (2) Department of Electronics,
Information and Bioengineering, Politecnico di Milano)
|
Open-VICO: An Open-Source Gazebo Toolkit for Vision-based Skeleton
Tracking in Human-Robot Collaboration
|
7 pages, 8 figures. The final version of this preprint has been
published at IEEE International Conference on Robot & Human Interactive
Communication. DOI: 10.1109/RO-MAN53752.2022.9900851. Code:
https://gitlab.iit.it/hrii-public/open-vico
| null |
10.1109/RO-MAN53752.2022.9900851
| null |
cs.RO cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simulation tools are essential for robotics research, especially for those
domains in which safety is crucial, such as Human-Robot Collaboration (HRC).
However, it is challenging to simulate human behaviors, and existing robotics
simulators do not integrate functional human models. This work presents
Open-VICO, an open-source toolkit to integrate virtual human models in Gazebo
focusing on vision-based human tracking. In particular, Open-VICO makes it
possible to combine, in the same simulation environment, realistic human
kinematic models, multi-camera vision setups, and human-tracking techniques,
along with numerous robot and sensor models, thanks to Gazebo. The possibility of incorporating
pre-recorded human skeleton motion with Motion Capture systems broadens the
landscape of human performance behavioral analysis within Human-Robot
Interaction (HRI) settings. To describe the functionalities and stress the
potential of the toolkit, four specific examples, chosen among relevant
literature challenges in the field, are developed using our simulation utilities:
i) 3D multi-RGB-D camera calibration in simulation, ii) creation of a synthetic
human skeleton tracking dataset based on OpenPose, iii) multi-camera scenario
for human skeleton tracking in simulation, and iv) a human-robot interaction
example. The key aim of this work is to create a straightforward pipeline which we
hope will motivate research on new vision-based algorithms and methodologies
for lightweight human-tracking and flexible human-robot applications.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 13:21:32 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 08:11:03 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Fortini",
"Luca",
""
],
[
"Leonori",
"Mattia",
""
],
[
"Gandarias",
"Juan M.",
""
],
[
"De Momi",
"Elena",
""
],
[
"Ajoudani",
"Arash",
""
]
] |
new_dataset
| 0.999476 |
2204.00840
|
Siyang Wen
|
Siyang Wen, Wei Guo, Yi Liu and Ruijie Wu
|
Rotated Object Detection via Scale-invariant Mahalanobis Distance in
Aerial Images
|
5 pages, 7 figures
| null |
10.1109/LGRS.2022.3197617
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rotated object detection in aerial images is a meaningful yet challenging
task as objects are densely arranged and have arbitrary orientations. The
eight-parameter (coordinates of box vectors) methods in rotated object
detection usually use ln-norm losses (L1 loss, L2 loss, and smooth L1 loss) as
loss functions. As ln-norm losses are mainly based on non-scale-invariant
Minkowski distance, using ln-norm losses will lead to inconsistency with the
detection metric rotational Intersection-over-Union (IoU) and training
instability. To address the problems, we use Mahalanobis distance to calculate
loss between the predicted and the target box vertices' vectors, proposing a
new loss function called Mahalanobis Distance Loss (MDL) for eight-parameter
rotated object detection. As Mahalanobis distance is scale-invariant, MDL is
more consistent with detection metric and more stable during training than
ln-norm losses. To alleviate the boundary discontinuity problem shared by all
other eight-parameter methods, we further take the minimum loss value to make
MDL continuous at boundary cases. We achieve state-of-the-art performance on
DOTA-v1.0 with the proposed MDL. Furthermore, compared to the experiment
that uses smooth L1 loss, we find that MDL performs better in rotated object
detection.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 11:21:39 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 10:07:41 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Wen",
"Siyang",
""
],
[
"Guo",
"Wei",
""
],
[
"Liu",
"Yi",
""
],
[
"Wu",
"Ruijie",
""
]
] |
new_dataset
| 0.996833 |
2204.04413
|
Xiaochen Liu
|
Xiaochen Liu, Yang Gao, Yu Bai, Jiawei Li, Yinan Hu, Heyan Huang,
Boxing Chen
|
PSP: Pre-trained Soft Prompts for Few-Shot Abstractive Summarization
|
Accepted as a long paper in COLING 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Few-shot abstractive summarization has become a challenging task in natural
language generation. To support it, we designed a novel soft prompts
architecture coupled with a prompt pre-training plus fine-tuning paradigm that
is effective and tunes only extremely light parameters. The soft prompts
include continuous input embeddings across an encoder and a decoder to fit the
structure of the generation models. Importantly, a novel inner-prompt placed in
the text is introduced to capture document-level information. The aim is to
devote attention to understanding the document that better prompts the model to
generate document-related content. The first step in the summarization
procedure is to conduct prompt pre-training with self-supervised pseudo-data.
This teaches the model basic summarizing capabilities. The model is then
fine-tuned with few-shot examples. Experimental results on the CNN/DailyMail
and XSum datasets show that our method, with only 0.1% of the parameters,
outperforms full-model tuning where all model parameters are tuned. It also
surpasses Prompt Tuning by a large margin and delivers competitive results
against Prefix-Tuning with 3% of the parameters.
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 07:40:52 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 07:56:50 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Liu",
"Xiaochen",
""
],
[
"Gao",
"Yang",
""
],
[
"Bai",
"Yu",
""
],
[
"Li",
"Jiawei",
""
],
[
"Hu",
"Yinan",
""
],
[
"Huang",
"Heyan",
""
],
[
"Chen",
"Boxing",
""
]
] |
new_dataset
| 0.958648 |
2205.02069
|
Yuan Zhou
|
Yuan Zhou and Keran Chen and Xiaofeng Li
|
Dual Branch Neural Network for Sea Fog Detection in Geostationary Ocean
Color Imager
| null | null |
10.1109/TGRS.2022.3196177
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sea fog significantly threatens the safety of maritime activities. This paper
develops a sea fog dataset (SFDD) and a dual branch sea fog detection network
(DB-SFNet). We investigate all the observed sea fog events in the Yellow Sea
and the Bohai Sea (118.1{\deg}E-128.1{\deg}E, 29.5{\deg}N-43.8{\deg}N) from
2010 to 2020, and collect the sea fog images for each event from the
Geostationary Ocean Color Imager (GOCI) to comprise the dataset SFDD. The
location of the sea fog in each image in SFDD is accurately marked. The
proposed dataset is characterized by a long-time span, large number of samples,
and accurate labeling, that can substantially improve the robustness of various
sea fog detection models. Furthermore, this paper proposes a dual branch sea
fog detection network to achieve accurate and holistic sea fog detection. The
proposed DB-SFNet is composed of a knowledge extraction module and a
dual-branch optional encoding-decoding module. The two modules jointly extract
discriminative features from both the visual and statistical domains. Experiments
show promising sea fog detection results with an F1-score of 0.77 and a
critical success index of 0.63. Compared with existing advanced deep learning
networks, DB-SFNet is superior in detection performance and stability,
particularly in the mixed cloud and fog areas.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 14:01:38 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Zhou",
"Yuan",
""
],
[
"Chen",
"Keran",
""
],
[
"Li",
"Xiaofeng",
""
]
] |
new_dataset
| 0.998877 |
2206.02502
|
Giuseppe Stragapede
|
Giuseppe Stragapede, Ruben Vera-Rodriguez, Ruben Tolosana and Aythami
Morales
|
BehavePassDB: Public Database for Mobile Behavioral Biometrics and
Benchmark Evaluation
|
11 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mobile behavioral biometrics have become a popular topic of research,
reaching promising results in terms of authentication, exploiting a multimodal
combination of touchscreen and background sensor data. However, there is no way
of knowing whether state-of-the-art classifiers in the literature can
distinguish between the notion of user and device. In this article, we present
a new database, BehavePassDB, structured into separate acquisition sessions and
tasks to mimic the most common aspects of mobile Human-Computer Interaction
(HCI). BehavePassDB is acquired through a dedicated mobile app installed on the
subjects' devices, also including the case of different users on the same
device for evaluation. We propose a standard experimental protocol and
benchmark for the research community to perform a fair comparison of novel
approaches with the state of the art. We propose and evaluate a system based on
Long-Short Term Memory (LSTM) architecture with triplet loss and modality
fusion at score level.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 11:21:15 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 11:21:34 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Stragapede",
"Giuseppe",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Morales",
"Aythami",
""
]
] |
new_dataset
| 0.987992 |
2207.05729
|
Chaim Baskin
|
Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein and Chaim Baskin
|
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems
|
Accepted to ACCV 2022
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep neural networks are known to be susceptible to adversarial perturbations
-- small perturbations that alter the output of the network and exist under
strict norm limitations. While such perturbations are usually discussed as
tailored to a specific input, a universal perturbation can be constructed to
alter the model's output on a set of inputs. Universal perturbations present a
more realistic case of adversarial attacks, as awareness of the model's exact
input is not required. In addition, the universal attack setting raises the
subject of generalization to unseen data, where given a set of inputs, the
universal perturbations aim to alter the model's output on out-of-sample data.
In this work, we study physical passive patch adversarial attacks on visual
odometry-based autonomous navigation systems. A visual odometry system aims to
infer the relative camera motion between two corresponding viewpoints, and is
frequently used by vision-based autonomous navigation systems to estimate their
state. For such navigation systems, a patch adversarial perturbation poses a
severe security issue, as it can be used to mislead a system onto some
collision course. To the best of our knowledge, we show for the first time that
the error margin of a visual odometry model can be significantly increased by
deploying patch adversarial attacks in the scene. We provide evaluation on
synthetic closed-loop drone navigation data and demonstrate that a comparable
vulnerability exists in real data. A reference implementation of the proposed
method and the reported experiments is provided at
https://github.com/patchadversarialattacks/patchadversarialattacks.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 14:41:06 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2022 06:22:32 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Nemcovsky",
"Yaniv",
""
],
[
"Jacoby",
"Matan",
""
],
[
"Bronstein",
"Alex M.",
""
],
[
"Baskin",
"Chaim",
""
]
] |
new_dataset
| 0.965452 |
2208.04083
|
Iman Bilal
|
Iman Munire Bilal, Bo Wang, Adam Tsakalidis, Dong Nguyen, Rob Procter,
Maria Liakata
|
Template-based Abstractive Microblog Opinion Summarisation
|
Accepted for publication in Transactions of the Association for
Computational Linguistics (TACL), 2022. Pre-MIT Press publication version
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the task of microblog opinion summarisation (MOS) and share a
dataset of 3100 gold-standard opinion summaries to facilitate research in this
domain. The dataset contains summaries of tweets spanning a 2-year period and
covers more topics than any other public Twitter summarisation dataset.
Summaries are abstractive in nature and have been created by journalists
skilled in summarising news articles following a template separating factual
information (main story) from author opinions. Our method differs from previous
work on generating gold-standard summaries from social media, which usually
involves selecting representative posts and thus favours extractive
summarisation models. To showcase the dataset's utility and challenges, we
benchmark a range of abstractive and extractive state-of-the-art summarisation
models and achieve good performance, with the former outperforming the latter.
We also show that fine-tuning is necessary to improve performance and
investigate the benefits of using different sample sizes.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 12:16:01 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 10:50:55 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Bilal",
"Iman Munire",
""
],
[
"Wang",
"Bo",
""
],
[
"Tsakalidis",
"Adam",
""
],
[
"Nguyen",
"Dong",
""
],
[
"Procter",
"Rob",
""
],
[
"Liakata",
"Maria",
""
]
] |
new_dataset
| 0.999278 |
2209.13363
|
Pu Jin
|
Pu Jin, Lichao Mou, Gui-Song Xia, Xiao Xiang Zhu
|
Anomaly Detection in Aerial Videos with Transformers
| null | null |
10.1109/TGRS.2022.3198130
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Unmanned aerial vehicles (UAVs) are widely applied for purposes of
inspection, search, and rescue operations by the virtue of low-cost,
large-coverage, real-time, and high-resolution data acquisition capacities.
Massive volumes of aerial videos are produced in these processes, in which
normal events often account for an overwhelming proportion. It is extremely
difficult to localize and extract abnormal events containing potentially
valuable information from long video streams manually. Therefore, we are
dedicated to developing anomaly detection methods to solve this issue. In this
paper, we create a new dataset, named DroneAnomaly, for anomaly detection in
aerial videos. This dataset provides 37 training video sequences and 22 testing
video sequences from 7 different realistic scenes with various anomalous
events. There are 87,488 color video frames (51,635 for training and 35,853 for
testing) with the size of $640 \times 640$ at 30 frames per second. Based on
this dataset, we evaluate existing methods and offer a benchmark for this task.
Furthermore, we present a new baseline model, ANomaly Detection with
Transformers (ANDT), which treats consecutive video frames as a sequence of
tubelets, utilizes a Transformer encoder to learn feature representations from
the sequence, and leverages a decoder to predict the next frame. Our network
models normality in the training phase and identifies an event with
unpredictable temporal dynamics as an anomaly in the test phase. Moreover, to
comprehensively evaluate the performance of our proposed method, we use not
only our Drone-Anomaly dataset but also another dataset. We make our dataset
and code publicly available. A demo video is available at
https://youtu.be/ancczYryOBY.
|
[
{
"version": "v1",
"created": "Sun, 25 Sep 2022 21:24:18 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Jin",
"Pu",
""
],
[
"Mou",
"Lichao",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.999662 |
2210.01166
|
Stanley Lewis
|
Stanley Lewis, Jana Pavlasek, Odest Chadwicke Jenkins
|
NARF22: Neural Articulated Radiance Fields for Configuration-Aware
Rendering
|
Accepted to the 2022 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS). Contact: Stanley Lewis, stanlew@umich.edu
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Articulated objects pose a unique challenge for robotic perception and
manipulation. Their increased number of degrees-of-freedom makes tasks such as
localization computationally difficult, while also making the process of
real-world dataset collection unscalable. With the aim of addressing these
scalability issues, we propose Neural Articulated Radiance Fields (NARF22), a
pipeline which uses a fully-differentiable, configuration-parameterized Neural
Radiance Field (NeRF) as a means of providing high quality renderings of
articulated objects. NARF22 requires no explicit knowledge of the object
structure at inference time. We propose a two-stage parts-based training
mechanism which allows the object rendering models to generalize well across
the configuration space even if the underlying training data has as few as one
configuration represented. We demonstrate the efficacy of NARF22 by training
configurable renderers on a real-world articulated tool dataset collected via a
Fetch mobile manipulation robot. We show the applicability of the model to
gradient-based inference methods through a configuration estimation and 6
degree-of-freedom pose refinement task. The project webpage is available at:
https://progress.eecs.umich.edu/projects/narf/.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 18:34:44 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Lewis",
"Stanley",
""
],
[
"Pavlasek",
"Jana",
""
],
[
"Jenkins",
"Odest Chadwicke",
""
]
] |
new_dataset
| 0.997633 |
2210.01205
|
Hamid Nasiri
|
Paria Ghaheri, Hamid Nasiri, Ahmadreza Shateri, Arman Homafar
|
Diagnosis of Parkinson's Disease Based on Voice Signals Using SHAP and
Hard Voting Ensemble Method
| null | null | null | null |
cs.LG eess.AS eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Background and Objective: Parkinson's disease (PD) is the second most common
progressive neurological condition after Alzheimer's, characterized by motor
and non-motor symptoms. Developing a method to diagnose the condition in its
early stages is essential because of the significant number of individuals
afflicted with this illness. PD is typically identified using motor symptoms
or other Neuroimaging techniques, such as DATSCAN and SPECT. These methods are
expensive, time-consuming, and unavailable to the general public; furthermore,
they are not very accurate. These constraints encouraged us to develop a novel
technique using SHAP and Hard Voting Ensemble Method based on voice signals.
Methods: In this article, we used Pearson Correlation Coefficients to
understand the relationship between input features and the output, and finally,
input features with high correlation were selected. These selected features
were classified by the Extreme Gradient Boosting (XGBoost), Light Gradient
Boosting Machine (LightGBM), Gradient Boosting, and Bagging. Moreover, the Hard
Voting Ensemble Method was determined based on the performance of the four
classifiers. At the final stage, we proposed Shapley Additive exPlanations
(SHAP) to rank the features according to their significance in diagnosing
Parkinson's disease. Results and Conclusion: The proposed method achieved
85.42% accuracy, 84.94% F1-score, 86.77% precision, 87.62% specificity, and
83.20% sensitivity. The study's findings demonstrated that the proposed method
outperformed state-of-the-art approaches and can assist physicians in
diagnosing Parkinson's cases.
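As a rough sketch of the feature-selection, hard-voting and SHAP-ranking pipeline described above (synthetic stand-in data and standard scikit-learn/XGBoost/LightGBM/SHAP APIs; not the authors' voice-signal features):

# Sketch of a Pearson-filtered, hard-voting ensemble with SHAP feature ranking.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Synthetic stand-in for the voice-signal feature table.
X, y = make_classification(n_samples=500, n_features=30, n_informative=10, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])

# Keep the features whose absolute Pearson correlation with the label is highest.
corr = df.apply(lambda col: np.corrcoef(col, y)[0, 1]).abs()
selected = corr.sort_values(ascending=False).head(15).index
X_sel = df[selected].values

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)

# Hard-voting ensemble over the four classifiers mentioned in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("lgbm", LGBMClassifier(n_estimators=200)),
        ("gb", GradientBoostingClassifier()),
        ("bag", BaggingClassifier()),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))

# SHAP ranks the selected features; applied here to the fitted XGBoost member,
# since a hard-voting ensemble exposes no single probability output to explain.
xgb = ensemble.named_estimators_["xgb"]
shap_values = shap.TreeExplainer(xgb).shap_values(X_te)
ranking = np.abs(shap_values).mean(axis=0).argsort()[::-1]
print("most influential features:", [selected[i] for i in ranking[:5]])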
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 19:45:22 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Ghaheri",
"Paria",
""
],
[
"Nasiri",
"Hamid",
""
],
[
"Shateri",
"Ahmadreza",
""
],
[
"Homafar",
"Arman",
""
]
] |
new_dataset
| 0.959638 |
2210.01330
|
Tao Yang
|
Fangtao Yu, Tao Yang, Qiuzhuo Chen
|
Doubly-Irregular Repeat-Accumulate Codes over Integer Rings for
Multi-user Communications
|
30 pages, 13 figures, submitted to IEEE Trans. Signal Processing
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Structured codes based on lattices were shown to provide enlarged capacity
for multi-user communication networks. In this paper, we study
capacity-approaching irregular repeat accumulate (IRA) codes over integer rings
$\mathbb{Z}_{2^{m}}$ for $2^m$-PAM signaling, $m=1,2,\cdots$. Such codes
feature the property that the integer sum of $K$ codewords belongs to the
extended codebook (or lattice) w.r.t. the base code. With it,
\emph{structured binning} can be utilized and the gains promised in lattice-based
network information theory can be materialized in practice. In designing IRA
ring codes, we first analyze the effect of zero-divisors of integer ring on the
iterative belief-propagation (BP) decoding, and show the invalidity of
symmetric Gaussian approximation. Then we propose a doubly IRA (D-IRA) ring
code structure, consisting of \emph{irregular multiplier distribution} and
\emph{irregular node-degree distribution}, that can restore the symmetry and
optimize the BP decoding threshold. For the point-to-point AWGN channel with
$2^m$-PAM inputs, D-IRA ring codes perform within 0.29 dB of the capacity
limit, outperforming existing bit-interleaved coded-modulation (BICM) and IRA
modulation codes over GF($2^m$). We then proceed to design D-IRA ring codes for
two important multi-user communication setups, namely compute-forward (CF) and
dirty paper coding (DPC), with $2^m$-PAM signaling. With it, a physical-layer
network coding scheme yields a gap to the CF limit by 0.24 dB, and a simple
linear DPC scheme exhibits a gap to the capacity by 0.91 dB.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 02:46:07 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Yu",
"Fangtao",
""
],
[
"Yang",
"Tao",
""
],
[
"Chen",
"Qiuzhuo",
""
]
] |
new_dataset
| 0.989415 |
2210.01357
|
Ryo Suzuki
|
Mehrad Faridan, Marcus Friedel, Ryo Suzuki
|
UltraBots: Large-Area Mid-Air Haptics for VR with Robotically Actuated
Ultrasound Transducers
|
UIST 2022 SIC
| null |
10.1145/3526114.3561350
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce UltraBots, a system that combines ultrasound haptic feedback and
robotic actuation for large-area mid-air haptics for VR. Ultrasound haptics can
provide precise mid-air haptic feedback and versatile shape rendering, but the
interaction area is often limited by the small size of the ultrasound devices,
restricting the possible interactions for VR. To address this problem, this
paper introduces a novel approach that combines robotic actuation with
ultrasound haptics. More specifically, we will attach ultrasound transducer
arrays to tabletop mobile robots or robotic arms for scalable, extendable, and
translatable interaction areas. We plan to use Sony Toio robots for 2D
translation and/or commercially available robotic arms for 3D translation.
Using robotic actuation and hand tracking measured by a VR HMD (e.g., Oculus
Quest), our system can keep the ultrasound transducers underneath the user's
hands to provide on-demand haptics. We demonstrate applications with workspace
environments, medical training, education and entertainment.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 03:51:46 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Faridan",
"Mehrad",
""
],
[
"Friedel",
"Marcus",
""
],
[
"Suzuki",
"Ryo",
""
]
] |
new_dataset
| 0.98191 |
2210.01455
|
Thomas Tiotto
|
T. F. Tiotto, A. S. Goossens, A. E. Dima, C. Yakopcic, T. Banerjee, J.
P. Borst, N. A. Taatgen
|
A Compact Model of Interface-Type Memristors Linking Physical and Device
Properties
|
14 pages, 2 pages of Supplementary Data, 4 figures, 4 tables
| null | null | null |
cs.ET cs.AI cs.NE physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
A memristor is an electronic device whose resistance depends on the voltage
history that has been applied to its two terminals. Despite its clear advantage
as a computational element, a suitable transport model is lacking for the
special class of interface-based memristors. Here, we adapt the widely-used
Yakopcic compact model by including transport equations relevant to
interface-type memristors. This model is able to reproduce the qualitative
behaviour measured upon Nb-doped SrTiO$_3$ memristive devices. Our analysis
demonstrates a direct correlation between the devices' characteristic
parameters and those of our model. The model can clearly identify the charge
transport mechanism in different resistive states thus facilitating evaluation
of the relevant parameters pertaining to resistive switching in interface-based
memristors. One clear application of our study is its ability to inform the
design and fabrication of related memristive devices.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 08:30:30 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Tiotto",
"T. F.",
""
],
[
"Goossens",
"A. S.",
""
],
[
"Dima",
"A. E.",
""
],
[
"Yakopcic",
"C.",
""
],
[
"Banerjee",
"T.",
""
],
[
"Borst",
"J. P.",
""
],
[
"Taatgen",
"N. A.",
""
]
] |
new_dataset
| 0.97469 |
2210.01485
|
Zixun Zhang
|
Yuncheng Jiang, Zixun Zhang, Shixi Qin, Yao Guo, Zhen Li, Shuguang Cui
|
APAUNet: Axis Projection Attention UNet for Small Target in 3D Medical
Segmentation
|
Accepted by ACCV2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In 3D medical image segmentation, small targets segmentation is crucial for
diagnosis but still faces challenges. In this paper, we propose the Axis
Projection Attention UNet, named APAUNet, for 3D medical image segmentation,
especially for small targets. Considering the large proportion of the
background in the 3D feature space, we introduce a projection strategy to
project the 3D features into three orthogonal 2D planes to capture the
contextual attention from different views. In this way, we can filter out the
redundant feature information and mitigate the loss of critical information for
small lesions in 3D scans. Then we utilize a dimension hybridization strategy
to fuse the 3D features with attention from different axes and merge them by a
weighted summation to adaptively learn the importance of different
perspectives. Finally, in the APA Decoder, we concatenate both high and low
resolution features in the 2D projection process, thereby obtaining more
precise multi-scale information, which is vital for small lesion segmentation.
Quantitative and qualitative experimental results on two public datasets (BTCV
and MSD) demonstrate that our proposed APAUNet outperforms the other methods.
Concretely, our APAUNet achieves an average dice score of 87.84 on BTCV, 84.48
on MSD-Liver and 69.13 on MSD-Pancreas, and significantly surpasses the previous
SOTA methods on small targets.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 09:28:58 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Jiang",
"Yuncheng",
""
],
[
"Zhang",
"Zixun",
""
],
[
"Qin",
"Shixi",
""
],
[
"Guo",
"Yao",
""
],
[
"Li",
"Zhen",
""
],
[
"Cui",
"Shuguang",
""
]
] |
new_dataset
| 0.998482 |
2210.01487
|
Ayush Gupta
|
Ahmed Baza, Ayush Gupta, Ekaterina Dorzhieva, Aleksey Fedoseev,
Dzmitry Tsetserukou
|
SwarMan: Anthropomorphic Swarm of Drones Avatar with Body Tracking and
Deep Learning-Based Gesture Recognition
|
6 pages, 8 figures, IEEE SMC 2022 conference
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anthropomorphic robot avatars present a conceptually novel approach to remote
affective communication, allowing people across the world a wider spectrum of
emotional and social exchanges than traditional 2D and 3D image data. However,
there are several limitations of current telepresence robots, such as the high
weight, complexity of the system that prevents its fast deployment, and the
limited workspace of the avatars mounted on either static or wheeled mobile
platforms.
In this paper, we present a novel concept of telecommunication through a
robot avatar based on an anthropomorphic swarm of drones; SwarMan. The
developed system consists of nine nanocopters controlled remotely by the
operator through a gesture recognition interface. SwarMan allows operators to
communicate by directly following their motions and by recognizing one of the
prerecorded emotional patterns, thus rendering the captured emotion as
illumination on the drones. The LSTM MediaPipe network was trained on a
collected dataset of 600 short videos with five emotional gestures. The
accuracy of achieved emotion recognition was 97% on the test dataset.
As communication through the swarm avatar significantly changes the visual
appearance of the operator, we investigated the ability of the users to
recognize and respond to emotions performed by the swarm of drones. The
experimental results revealed a high consistency between the users in rating
emotions. Additionally, users indicated low physical demand (2.25 on the Likert
scale) and were satisfied with their performance (1.38 on the Likert scale)
when communicating by the SwarMan interface.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 09:31:59 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Baza",
"Ahmed",
""
],
[
"Gupta",
"Ayush",
""
],
[
"Dorzhieva",
"Ekaterina",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.997338 |
2210.01536
|
Soohyun Park
|
Soohyun Park, Chanyoung Park, Soyi Jung, Minseok Choi, and Joongheon
Kim
|
Age-of-Information Aware Contents Caching and Distribution for Connected
Vehicles
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To support rapid and accurate autonomous driving services, road environment
information, which is difficult to obtain through vehicle sensors themselves,
is collected and utilized through communication with surrounding infrastructure
in connected vehicle networks. For this reason, we consider a scenario that
utilizes infrastructure such as road side units (RSUs) and macro base station
(MBS) in situations where caching of road environment information is required.
Due to the rapidly changing road environment, a concept that represents the
freshness of road content, the age of information (AoI), is important. Based on
the AoI value, in a connected vehicle system it is essential to keep
appropriate content in the RSUs in advance, update it before it expires, and
send the content to the vehicles that want to use it. However, too frequent
content transmission aimed at minimizing AoI leads to indiscriminate use of
network resources. Furthermore, a transmission control in which content AoI and
service delay are not properly considered adversely affects user service.
Therefore, it is important to find an appropriate compromise. For these
reasons, the objective of this paper is to reduce the system cost of content
delivery through the proposed system while minimizing the content AoI observed
at the MBS, RSUs, and UVs. The transmission process, which can be divided into
two stages, i.e., content caching and service, is approached
using Markov decision process (MDP) and Lyapunov optimization framework,
respectively, which guarantee optimal solutions, as verified via data-intensive
performance evaluation.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 11:40:14 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Park",
"Soohyun",
""
],
[
"Park",
"Chanyoung",
""
],
[
"Jung",
"Soyi",
""
],
[
"Choi",
"Minseok",
""
],
[
"Kim",
"Joongheon",
""
]
] |
new_dataset
| 0.985842 |
2210.01559
|
Haomiao Ni
|
Haomiao Ni, Yihao Liu, Sharon X. Huang, Yuan Xue
|
Cross-identity Video Motion Retargeting with Joint Transformation and
Synthesis
|
WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel dual-branch Transformation-Synthesis
network (TS-Net), for video motion retargeting. Given one subject video and one
driving video, TS-Net can produce a new plausible video with the subject
appearance of the subject video and motion pattern of the driving video. TS-Net
consists of a warp-based transformation branch and a warp-free synthesis
branch. The novel design of dual branches combines the strengths of
deformation-grid-based transformation and warp-free generation for better
identity preservation and robustness to occlusion in the synthesized videos. A
mask-aware similarity module is further introduced to the transformation branch
to reduce computational overhead. Experimental results on face and dance
datasets show that TS-Net achieves better performance in video motion
retargeting than several state-of-the-art models as well as its single-branch
variants. Our code is available at https://github.com/nihaomiao/WACV23_TSNet.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 03:09:12 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Ni",
"Haomiao",
""
],
[
"Liu",
"Yihao",
""
],
[
"Huang",
"Sharon X.",
""
],
[
"Xue",
"Yuan",
""
]
] |
new_dataset
| 0.968054 |
2210.01571
|
Adrien Bardes
|
Adrien Bardes and Jean Ponce and Yann LeCun
|
VICRegL: Self-Supervised Learning of Local Visual Features
|
Accepted at NeurIPS 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most recent self-supervised methods for learning image representations focus
on either producing a global feature with invariance properties, or producing a
set of local features. The former works best for classification tasks while the
latter is best for detection and segmentation tasks. This paper explores the
fundamental trade-off between learning local and global features. A new method
called VICRegL is proposed that learns good global and local features
simultaneously, yielding excellent performance on detection and segmentation
tasks while maintaining good performance on classification tasks. Concretely,
two identical branches of a standard convolutional net architecture are fed two
differently distorted versions of the same image. The VICReg criterion is
applied to pairs of global feature vectors. Simultaneously, the VICReg
criterion is applied to pairs of local feature vectors occurring before the
last pooling layer. Two local feature vectors are attracted to each other if
their l2-distance is below a threshold or if their relative locations are
consistent with a known geometric transformation between the two input images.
We demonstrate strong performance on linear classification and segmentation
transfer tasks. Code and pretrained models are publicly available at:
https://github.com/facebookresearch/VICRegL
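A minimal sketch of the VICReg criterion applied to a pair of global embeddings (loss weights and dimensions here are illustrative defaults, not the exact VICRegL configuration):

# Sketch of the VICReg criterion for a pair of (global) embeddings z_a, z_b.
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    n, d = z_a.shape
    # Invariance: the two views of the same image should map to similar embeddings.
    sim = F.mse_loss(z_a, z_b)
    # Variance: keep each embedding dimension's std above 1 to prevent collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))
    # Covariance: push off-diagonal covariance terms toward zero to decorrelate dimensions.
    za = z_a - z_a.mean(dim=0)
    zb = z_b - z_b.mean(dim=0)
    cov_a = (za.T @ za) / (n - 1)
    cov_b = (zb.T @ zb) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d
    return sim_w * sim + var_w * var + cov_w * cov

# Toy usage with random embeddings standing in for the two branches' outputs.
z_a, z_b = torch.randn(64, 128), torch.randn(64, 128)
print(vicreg_loss(z_a, z_b).item())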
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 12:54:25 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Bardes",
"Adrien",
""
],
[
"Ponce",
"Jean",
""
],
[
"LeCun",
"Yann",
""
]
] |
new_dataset
| 0.99976 |
2210.01613
|
Priyanka Sen
|
Priyanka Sen, Alham Fikri Aji, Amir Saffari
|
Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End
Question Answering
|
Accepted at COLING 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Mintaka, a complex, natural, and multilingual dataset designed
for experimenting with end-to-end question-answering models. Mintaka is
composed of 20,000 question-answer pairs collected in English, annotated with
Wikidata entities, and translated into Arabic, French, German, Hindi, Italian,
Japanese, Portuguese, and Spanish for a total of 180,000 samples. Mintaka
includes 8 types of complex questions, including superlative, intersection, and
multi-hop questions, which were naturally elicited from crowd workers. We run
baselines over Mintaka, the best of which achieves 38% hits@1 in English and
31% hits@1 multilingually, showing that existing models have room for
improvement. We release Mintaka at https://github.com/amazon-research/mintaka.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 13:54:29 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Sen",
"Priyanka",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Saffari",
"Amir",
""
]
] |
new_dataset
| 0.999863 |
2210.01645
|
Andr\'e Santos
|
Andr\'e Santos, Atabak Dehban, Jos\'e Santos-Victor
|
Robotic Learning the Sequence of Packing Irregular Objects from Human
Demonstrations
|
8 pages, 7 figures
| null | null | null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We address the unsolved task of robotic bin packing with irregular objects,
such as groceries, where the underlying constraints on object placement and
manipulation, and the diverse objects' physical properties make preprogrammed
strategies unfeasible. Our approach is to learn directly from expert
demonstrations in order to extract implicit task knowledge and strategies to
achieve efficient space usage and safe object positioning, and to generate
human-like behaviors that enhance human-robot trust. We collect and make
available a novel and diverse dataset, BoxED, of box packing demonstrations by
humans in virtual reality. In total, 263 boxes were packed with
supermarket-like objects by 43 participants, yielding 4644 object
manipulations. We use the BoxED dataset to learn a Markov chain to predict the
object packing sequence for a given set of objects and compare it with human
performance. Our experimental results show that the model surpasses human
performance by generating sequence predictions that humans classify as
human-like more frequently than human-generated sequences.
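A minimal sketch of the kind of first-order Markov-chain sequence predictor described above; the object categories and demonstration sequences below are invented stand-ins, not BoxED data:

# Sketch of a first-order Markov chain over object categories for predicting a packing order.
from collections import defaultdict

demos = [
    ["box", "bottle", "can", "bag_of_chips"],
    ["box", "can", "bottle", "bag_of_chips"],
    ["bottle", "box", "can", "bag_of_chips"],
]

# Estimate transition counts P(next category | current category), with a START state.
counts = defaultdict(lambda: defaultdict(int))
for seq in demos:
    prev = "START"
    for cat in seq:
        counts[prev][cat] += 1
        prev = cat

def predict_sequence(remaining):
    """Greedily pick the most probable next category among the objects still to pack."""
    order, prev, remaining = [], "START", list(remaining)
    while remaining:
        nxt = max(remaining, key=lambda c: counts[prev].get(c, 0))
        order.append(nxt)
        remaining.remove(nxt)
        prev = nxt
    return order

print(predict_sequence(["bag_of_chips", "can", "bottle", "box"]))
# prints a packing order consistent with the toy demonstrations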
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 14:44:55 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Santos",
"André",
""
],
[
"Dehban",
"Atabak",
""
],
[
"Santos-Victor",
"José",
""
]
] |
new_dataset
| 0.98591 |
2210.01647
|
Adrian Mos
|
Chuhao Wu, Jose Miguel Perez-Alvarez, Adrian Mos, John M. Carroll
|
Codeless App Development: Evaluating A Cloud-Native Domain-Specific
Functions Approach
| null | null | null | null |
cs.SE cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile applications play an important role in the economy today and there is
an increasing trend for app enablement on multiple platforms. However,
creating, distributing, and maintaining an application remain expert tasks.
Even for software developers, the process can be error-prone and
resource-consuming, especially when targeting different platforms
simultaneously. Researchers have proposed several frameworks to facilitate
cross-platform app development, but little attention has been paid to
non-technical users. In this paper, we described the Flow framework, which
takes the advantage of domain-specific languages to enable no-code
specification for app modeling. The cloud-native coordination mechanism further
supports non-technical users to execute, monitor, and maintain apps for any
target platforms. User evaluations were conducted to assess the usability and
user experience with the system. The results indicated that users can develop
apps in Flow with ease, but the prototype could be optimized to reduce learning
time and workload.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 14:48:58 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Wu",
"Chuhao",
""
],
[
"Perez-Alvarez",
"Jose Miguel",
""
],
[
"Mos",
"Adrian",
""
],
[
"Carroll",
"John M.",
""
]
] |
new_dataset
| 0.976673 |
2210.01688
|
Shashank Joshi
|
Shashank Joshi and Arhan Choudhury
|
Blockchain-Based Decentralized Knowledge Marketplace Using Active
Inference
| null | null | null | null |
cs.CR cs.AI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
A knowledge market can be described as a type of market where there is a
consistent supply of data to satisfy the demand for information and is
responsible for the mapping of potential problem solvers with the entities
which need these solutions. It is possible to define them as value-exchange
systems in which the dynamic features of the creation and exchange of
intellectual assets serve as the fundamental drivers of the frequency, nature,
and outcomes of interactions among various stakeholders. Furthermore, the
provision of financial backing for research is an essential component in the
process of developing a knowledge market that is capable of enduring over time,
and it is also an essential driver of the progression of scientific
investigation. This paper underlines flaws associated with the conventional
knowledge-based market, including but not limited to excessive financing
concentration, ineffective information exchange, a lack of security, mapping of
entities, etc. The authors present a decentralized framework for the knowledge
marketplace incorporating technologies such as blockchain, active inference,
zero-knowledge proof, etc. The proposed decentralized framework provides not
only an efficient mapping mechanism to map entities in the marketplace but also
a more secure and controlled way to share knowledge and services among various
stakeholders.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 15:37:31 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Joshi",
"Shashank",
""
],
[
"Choudhury",
"Arhan",
""
]
] |
new_dataset
| 0.996742 |
2210.01706
|
Simon X. Yang
|
Danjie Zhu, Simon X. Yang, Mohammad Biglarbegian
|
A Fuzzy Logic-based Cascade Control without Actuator Saturation for the
Unmanned Underwater Vehicle Trajectory Tracking
| null | null |
10.1007/s10846-022-01742-w
| null |
cs.RO cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
An intelligent control strategy is proposed to eliminate the actuator
saturation problem that exists in the trajectory tracking process of unmanned
underwater vehicles (UUV). The control strategy consists of two parts: for the
kinematic modeling part, a fuzzy logic-refined backstepping control is
developed to achieve control velocities within acceptable ranges and errors of
small fluctuations; on the basis of the velocities deduced by the improved
kinematic control, the sliding mode control (SMC) is introduced in the dynamic
modeling to obtain corresponding torques and forces that should be applied to
the vehicle body. With the control velocities computed by the kinematic model
and applied forces derived by the dynamic model, the robustness and accuracy of
the UUV trajectory tracking can be achieved without actuator saturation.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 16:01:12 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Zhu",
"Danjie",
""
],
[
"Yang",
"Simon X.",
""
],
[
"Biglarbegian",
"Mohammad",
""
]
] |
new_dataset
| 0.976558 |
2210.01721
|
Mosam Dabhi
|
Mosam Dabhi, Chaoyang Wang, Tim Clifford, Laszlo Attila Jeni, Ian R.
Fasel, Simon Lucey
|
MBW: Multi-view Bootstrapping in the Wild
|
NeurIPS 2022 conference. Project webpage and code:
https://github.com/mosamdabhi/MBW
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Labeling articulated objects in unconstrained settings has a wide variety of
applications including entertainment, neuroscience, psychology, ethology, and
many fields of medicine. Large offline labeled datasets do not exist for all
but the most common articulated object categories (e.g., humans). Hand labeling
these landmarks within a video sequence is a laborious task. Learned landmark
detectors can help, but can be error-prone when trained from only a few
examples. Multi-camera systems that train fine-grained detectors have shown
significant promise in detecting such errors, allowing for self-supervised
solutions that only need a small percentage of the video sequence to be
hand-labeled. The approach, however, is based on calibrated cameras and rigid
geometry, making it expensive, difficult to manage, and impractical in
real-world scenarios. In this paper, we address these bottlenecks by combining
a non-rigid 3D neural prior with deep flow to obtain high-fidelity landmark
estimates from videos with only two or three uncalibrated, handheld cameras.
With just a few annotations (representing 1-2% of the frames), we are able to
produce 2D results comparable to state-of-the-art fully supervised methods,
along with 3D reconstructions that are impossible with other existing
approaches. Our Multi-view Bootstrapping in the Wild (MBW) approach
demonstrates impressive results on standard human datasets, as well as tigers,
cheetahs, fish, colobus monkeys, chimpanzees, and flamingos from videos
captured casually in a zoo. We release the codebase for MBW as well as this
challenging zoo dataset consisting of image frames of tail-end distribution
categories with their corresponding 2D, 3D labels generated from minimal human
intervention.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 16:27:54 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Dabhi",
"Mosam",
""
],
[
"Wang",
"Chaoyang",
""
],
[
"Clifford",
"Tim",
""
],
[
"Jeni",
"Laszlo Attila",
""
],
[
"Fasel",
"Ian R.",
""
],
[
"Lucey",
"Simon",
""
]
] |
new_dataset
| 0.976072 |
2210.01771
|
Charith Perera
|
Hakan Kayan, Yasar Majib, Wael Alsafery, Mahmoud Barhamgi, Charith
Perera
|
AnoML-IoT: An End to End Re-configurable Multi-protocol Anomaly
Detection Pipeline for Internet of Things
|
Elsevier Internet of Things, Volume 16, 100437, December 2021
|
Elsevier Internet of Things, Volume 16, 100437, December 2021
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development in ubiquitous computing has enabled the use of
microcontrollers as edge devices. These devices are used to develop truly
distributed IoT-based mechanisms where machine learning (ML) models are
utilized. However, integrating ML models to edge devices requires an
understanding of various software tools such as programming languages and
domain-specific knowledge. Anomaly detection is one of the domains where a high
level of expertise is required to achieve promising results. In this work, we
present AnoML which is an end-to-end data science pipeline that allows the
integration of multiple wireless communication protocols, anomaly detection
algorithms, deployment to the edge, fog, and cloud platforms with minimal user
interaction. We facilitate the development of IoT anomaly detection mechanisms
by reducing the barriers that are formed due to the heterogeneity of an IoT
environment. The proposed pipeline supports four main phases: (i) data
ingestion, (ii) model training, (iii) model deployment, (iv) inference and
maintenance. We evaluate the pipeline with two anomaly detection datasets while
comparing the efficiency of several machine learning algorithms within
different nodes. We also provide the source code
(https://gitlab.com/IOTGarage/anoml-iot-analytics) of the developed tools which
are the main components of the pipeline.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 17:34:25 GMT"
}
] | 2022-10-05T00:00:00 |
[
[
"Kayan",
"Hakan",
""
],
[
"Majib",
"Yasar",
""
],
[
"Alsafery",
"Wael",
""
],
[
"Barhamgi",
"Mahmoud",
""
],
[
"Perera",
"Charith",
""
]
] |
new_dataset
| 0.996708 |
1909.12943
|
Mesay Samuel
|
Mesay Samuel Gondere, Lars Schmidt-Thieme, Abiot Sinamo Boltena, Hadi
Samer Jomaa
|
Handwritten Amharic Character Recognition Using a Convolutional Neural
Network
|
ECDA2019 Conference Oral Presentation
| null |
10.5445/KSP/1000098011/09
| null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Amharic is the official language of the Federal Democratic Republic of
Ethiopia. There are many historic Amharic and Ethiopic handwritten documents
addressing various relevant issues including governance, science, religion,
social rules, culture, and art, which constitute a very rich body of indigenous
knowledge. The Amharic language has its own alphabet derived from Ge'ez, which
is currently the liturgical language in Ethiopia. Handwritten character
recognition for non-Latin scripts such as Amharic has not been well addressed,
especially with state-of-the-art techniques. This research work designs for the first time
a model for Amharic handwritten character recognition using a convolutional
neural network. The dataset was organized from collected sample handwritten
documents and data augmentation was applied for machine learning. The model was
further enhanced using multi-task learning from the relationships of the
characters. Promising results are observed from the latter model, which can
further be applied to word prediction.
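For illustration, a minimal CNN-with-augmentation sketch of the kind described above, assuming TensorFlow/Keras; the input size, layer sizes and class count are placeholders, not the paper's architecture:

# Sketch of a small CNN for character classification with light on-the-fly augmentation.
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 238   # placeholder; the Amharic script has a few hundred characters

model = tf.keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.RandomRotation(0.05),           # augmentation layers are active only during training
    layers.RandomTranslation(0.1, 0.1),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()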
|
[
{
"version": "v1",
"created": "Mon, 23 Sep 2019 21:12:22 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Gondere",
"Mesay Samuel",
""
],
[
"Schmidt-Thieme",
"Lars",
""
],
[
"Boltena",
"Abiot Sinamo",
""
],
[
"Jomaa",
"Hadi Samer",
""
]
] |
new_dataset
| 0.969958 |
2006.03243
|
Hai Shu
|
Hai Shu, Ronghua Shi, Qiran Jia, Hongtu Zhu, Ziqi Chen
|
mFI-PSO: A Flexible and Effective Method in Adversarial Image Generation
for Deep Neural Networks
|
Accepted by 2022 International Joint Conference on Neural Networks
(IJCNN)
|
2022 International Joint Conference on Neural Networks (IJCNN)
|
10.1109/IJCNN55064.2022.9892433
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks (DNNs) have achieved great success in image
classification, but can be very vulnerable to adversarial attacks with small
perturbations to images. To improve adversarial image generation for DNNs, we
develop a novel method, called mFI-PSO, which utilizes a Manifold-based
First-order Influence measure for vulnerable image and pixel selection and the
Particle Swarm Optimization for various objective functions. Our mFI-PSO can
thus effectively design adversarial images with flexible, customized options on
the number of perturbed pixels, the misclassification probability, and the
targeted incorrect class. Experiments demonstrate the flexibility and
effectiveness of our mFI-PSO in adversarial attacks and its appealing
advantages over some popular methods.
|
[
{
"version": "v1",
"created": "Fri, 5 Jun 2020 05:42:58 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Sep 2020 02:25:19 GMT"
},
{
"version": "v3",
"created": "Sun, 8 May 2022 23:19:04 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Shu",
"Hai",
""
],
[
"Shi",
"Ronghua",
""
],
[
"Jia",
"Qiran",
""
],
[
"Zhu",
"Hongtu",
""
],
[
"Chen",
"Ziqi",
""
]
] |
new_dataset
| 0.965129 |
2106.08267
|
Mesay Samuel
|
Mesay Samuel Gondere, Lars Schmidt-Thieme, Durga Prasad Sharma,
Randolf Scholz
|
Multi-script Handwritten Digit Recognition Using Multi-task Learning
| null | null |
10.3233/JIFS-212233
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Handwritten digit recognition is one of the most extensively studied areas in
machine learning. Apart from the wide body of research on handwritten digit
recognition on the MNIST dataset, there are many other works on the recognition
of various scripts. However, multi-script digit recognition, which encourages
the development of robust and multipurpose systems, is not very common.
Additionally working on multi-script digit recognition enables multi-task
learning, considering the script classification as a related task for instance.
It is evident that multi-task learning improves model performance through
inductive transfer using the information contained in related tasks. Therefore,
in this study multi-script handwritten digit recognition using multi-task
learning will be investigated. As a specific case of demonstrating the solution
to the problem, Amharic handwritten character recognition will also be
experimented. The handwritten digits of three scripts including Latin, Arabic
and Kannada are studied to show that multi-task models with reformulation of
the individual tasks have shown promising results. In this study a novel way of
using the individual tasks' predictions was proposed to improve classification
performance and regularize the different losses for the purpose of the main task.
This approach outperformed the baseline and the conventional multi-task
learning models. More importantly, it avoided the need for weighting the
different losses of the tasks, which is one of the challenges in multi-task
learning.
|
[
{
"version": "v1",
"created": "Tue, 15 Jun 2021 16:30:37 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Gondere",
"Mesay Samuel",
""
],
[
"Schmidt-Thieme",
"Lars",
""
],
[
"Sharma",
"Durga Prasad",
""
],
[
"Scholz",
"Randolf",
""
]
] |
new_dataset
| 0.966359 |
2107.02238
|
Xuan Hu
|
Pranav O. Mathews and Christian B. Duffee and Abel Thayil and Ty E.
Stovall and Christopher H. Bennett and Felipe Garcia-Sanchez and Matthew J.
Marinella and Jean Anne C. Incorvia and Naimul Hassan and Xuan Hu and Joseph
S. Friedman
|
High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural
Network
| null | null | null | null |
cs.NE cond-mat.dis-nn eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Neuromorphic computing systems overcome the limitations of traditional von
Neumann computing architectures. These computing systems can be further
improved upon by using emerging technologies that are more efficient than CMOS
for neural computation. Recent research has demonstrated that memristors and
spintronic devices in various neural network designs boost efficiency and
speed. This paper presents a biologically inspired fully spintronic neuron used
in a fully spintronic Hopfield RNN. The network is used to solve tasks, and the
results are compared against those of current Hopfield neuromorphic
architectures which use emerging technologies.
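For readers unfamiliar with the underlying model, a toy software sketch of a classical Hopfield recurrent network (Hebbian storage, asynchronous sign updates); this only illustrates the update dynamics, not the spintronic hardware realization:

# Sketch of a classical Hopfield network: store +/-1 patterns, then recover one from noise.
import numpy as np

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((3, 64)))          # three +/-1 patterns to store

# Hebbian weight matrix with zero diagonal.
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of pattern 0 and let the network relax.
state = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
state[flip] *= -1

for _ in range(5):                                          # a few asynchronous sweeps
    for i in rng.permutation(64):
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0

print("recovered pattern 0:", np.array_equal(state, patterns[0]))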
|
[
{
"version": "v1",
"created": "Mon, 5 Jul 2021 19:23:33 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Oct 2022 00:15:02 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Mathews",
"Pranav O.",
""
],
[
"Duffee",
"Christian B.",
""
],
[
"Thayil",
"Abel",
""
],
[
"Stovall",
"Ty E.",
""
],
[
"Bennett",
"Christopher H.",
""
],
[
"Garcia-Sanchez",
"Felipe",
""
],
[
"Marinella",
"Matthew J.",
""
],
[
"Incorvia",
"Jean Anne C.",
""
],
[
"Hassan",
"Naimul",
""
],
[
"Hu",
"Xuan",
""
],
[
"Friedman",
"Joseph S.",
""
]
] |
new_dataset
| 0.99824 |
2112.04744
|
Jun Wang
|
Jun Wang, Zhoujing Li, Yixuan Qiao, Qiming Qin, Peng Gao, Guotong Xie
|
Superpixel-Based Building Damage Detection from Post-earthquake Very
High Resolution Imagery Using Deep Neural Networks
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Building damage detection after natural disasters like earthquakes is crucial
for initiating effective emergency response actions. Remotely sensed very high
spatial resolution (VHR) imagery can provide vital information due to their
ability to map the affected buildings with high geometric precision. Many
approaches have been developed to detect damaged buildings due to earthquakes.
However, little attention has been paid to exploiting rich features represented
in VHR images using Deep Neural Networks (DNN). This paper presents a novel
superpixel based approach combining DNN and a modified segmentation method, to
detect damaged buildings from VHR imagery. Firstly, a modified Fast Scanning
and Adaptive Merging method is extended to create initial over-segmentation.
Secondly, the segments are merged based on the Region Adjacent Graph (RAG),
considering an improved semantic similarity criterion composed of Local Binary
Patterns (LBP) texture, spectral, and shape features. Thirdly, a pre-trained
DNN using Stacked Denoising Auto-Encoders called SDAE-DNN is presented, to
exploit the rich semantic features for building damage detection. Deep-layer
feature abstraction of SDAE-DNN could boost detection accuracy through learning
more intrinsic and discriminative features, which outperformed other methods
using state-of-the-art alternative classifiers. We demonstrate the feasibility
and effectiveness of our method using a subset of WorldView-2 imagery, in the
complex urban areas of Bhaktapur, Nepal, which was affected by the Nepal
Earthquake of April 25, 2015.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 08:05:02 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Dec 2021 03:00:26 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Dec 2021 01:34:13 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Oct 2022 02:40:46 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Wang",
"Jun",
""
],
[
"Li",
"Zhoujing",
""
],
[
"Qiao",
"Yixuan",
""
],
[
"Qin",
"Qiming",
""
],
[
"Gao",
"Peng",
""
],
[
"Xie",
"Guotong",
""
]
] |
new_dataset
| 0.998736 |
2203.08041
|
Samuel Joutard
|
Samuel Joutard, Thomas Pheiffer, Chloe Audigier, Patrick Wohlfahrt,
Reuben Dorent, Sebastien Piat, Tom Vercauteren, Marc Modat, Tommaso Mansi
|
A multi-organ point cloud registration algorithm for abdominal CT
registration
|
Accepted at WBIR 2022
| null |
10.1007/978-3-031-11203-4_9
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Registering CT images of the abdomen is a crucial step for several tasks such
as disease progression tracking or surgical planning. It is also a challenging
step because of the heterogeneous content of the human abdomen which implies
complex deformations. In this work, we focus on accurately registering a subset
of organs of interest. We register organ surface point clouds, as may typically
be extracted from an automatic segmentation pipeline, by expanding the Bayesian
Coherent Point Drift algorithm (BCPD). We introduce MO-BCPD, a multi-organ
version of the BCPD algorithm which explicitly models three important aspects
of this task: organ individual elastic properties, inter-organ motion coherence
and segmentation inaccuracy. This model also provides an interpolation
framework to estimate the deformation of the entire volume. We demonstrate the
efficiency of our method by registering different patients from the LITS
challenge dataset. The target registration error on anatomical landmarks is
almost twice as small for MO-BCPD compared to standard BCPD while imposing the
same constraints on individual organs deformation.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 16:27:29 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Joutard",
"Samuel",
""
],
[
"Pheiffer",
"Thomas",
""
],
[
"Audigier",
"Chloe",
""
],
[
"Wohlfahrt",
"Patrick",
""
],
[
"Dorent",
"Reuben",
""
],
[
"Piat",
"Sebastien",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Modat",
"Marc",
""
],
[
"Mansi",
"Tommaso",
""
]
] |
new_dataset
| 0.970488 |
2203.08423
|
Kerry He
|
Kerry He, Pradeepsundar Simini, Wesley Chan, Dana Kuli\'c, Elizabeth
Croft, Akansel Cosgun
|
On-The-Go Robot-to-Human Handovers with a Mobile Manipulator
|
6 pages, 7 figures, 2 tables, submitted to RO-MAN 2022
| null |
10.1109/RO-MAN53752.2022.9900642
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing approaches to direct robot-to-human handovers are typically
implemented on fixed-base robot arms, or on mobile manipulators that come to a
full stop before performing the handover. We propose "on-the-go" handovers
which permit a moving mobile manipulator to hand over an object to a human
without stopping. The on-the-go handover motion is generated with a reactive
controller that allows simultaneous control of the base and the arm. In a user
study, human receivers subjectively assessed on-the-go handovers to be more
efficient, predictable, natural, better timed and safer than handovers that
implemented a "stop-and-deliver" behavior.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 06:54:53 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"He",
"Kerry",
""
],
[
"Simini",
"Pradeepsundar",
""
],
[
"Chan",
"Wesley",
""
],
[
"Kulić",
"Dana",
""
],
[
"Croft",
"Elizabeth",
""
],
[
"Cosgun",
"Akansel",
""
]
] |
new_dataset
| 0.970908 |
2204.02849
|
Shelly Sheynin
|
Shelly Sheynin, Oron Ashual, Adam Polyak, Uriel Singer, Oran Gafni,
Eliya Nachmani, Yaniv Taigman
|
KNN-Diffusion: Image Generation via Large-Scale Retrieval
| null | null | null | null |
cs.CV cs.AI cs.CL cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent text-to-image models have achieved impressive results. However, since
they require large-scale datasets of text-image pairs, it is impractical to
train them on new domains where data is scarce or not labeled. In this work, we
propose using large-scale retrieval methods, in particular, efficient
k-Nearest-Neighbors (kNN), which offers novel capabilities: (1) training a
substantially small and efficient text-to-image diffusion model without any
text, (2) generating out-of-distribution images by simply swapping the
retrieval database at inference time, and (3) performing text-driven local
semantic manipulations while preserving object identity. To demonstrate the
robustness of our method, we apply our kNN approach on two state-of-the-art
diffusion backbones, and show results on several different datasets. As
evaluated by human studies and automatic metrics, our method achieves
state-of-the-art results compared to existing approaches that train
text-to-image generation models using images only (without paired text data).
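A minimal sketch of the retrieval step assumed above, using FAISS with random vectors standing in for learned (e.g. CLIP-style) image embeddings; the actual conditioning of the diffusion model is more involved:

# Sketch of the kNN retrieval: find the k nearest database embeddings to a query embedding.
import numpy as np
import faiss

d, n_db, k = 512, 10000, 10
database = np.random.randn(n_db, d).astype("float32")   # stand-in image embeddings
faiss.normalize_L2(database)                             # cosine similarity via inner product

index = faiss.IndexFlatIP(d)
index.add(database)

query = np.random.randn(1, d).astype("float32")          # stand-in query (image or text) embedding
faiss.normalize_L2(query)
scores, ids = index.search(query, k)                     # k nearest neighbours in the database
neighbours = database[ids[0]]                            # these would condition the diffusion model
print(ids[0], scores[0])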
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 14:13:35 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Oct 2022 11:55:59 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Sheynin",
"Shelly",
""
],
[
"Ashual",
"Oron",
""
],
[
"Polyak",
"Adam",
""
],
[
"Singer",
"Uriel",
""
],
[
"Gafni",
"Oran",
""
],
[
"Nachmani",
"Eliya",
""
],
[
"Taigman",
"Yaniv",
""
]
] |
new_dataset
| 0.963581 |
2205.13524
|
Binbin Huang
|
Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
|
PREF: Phasorial Embedding Fields for Compact Neural Representations
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present an efficient frequency-based neural representation termed PREF: a
shallow MLP augmented with a phasor volume that covers a significantly broader
spectrum than previous Fourier feature mapping or Positional Encoding. At the
core is our compact 3D phasor volume where frequencies distribute uniformly
along a 2D plane and dilate along a 1D axis. To this end, we develop a tailored
and efficient Fourier transform that combines both Fast Fourier transform and
local interpolation to accelerate na\"ive Fourier mapping. We also introduce a
Parseval regularizer that stabilizes frequency-based learning. In these ways, our
PREF reduces the costly MLP in the frequency-based representation, thereby
significantly closing the efficiency gap between it and other hybrid
representations, and improving its interpretability. Comprehensive experiments
demonstrate that our PREF is able to capture high-frequency details while
remaining compact and robust, including 2D image generalization, 3D signed
distance function regression and 5D neural radiance field reconstruction.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 17:43:03 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2022 05:57:18 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Oct 2022 11:28:10 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Huang",
"Binbin",
""
],
[
"Yan",
"Xinhao",
""
],
[
"Chen",
"Anpei",
""
],
[
"Gao",
"Shenghua",
""
],
[
"Yu",
"Jingyi",
""
]
] |
new_dataset
| 0.999294 |
2206.03480
|
R. Kenny Jones
|
R. Kenny Jones and Aalia Habib and Daniel Ritchie
|
SHRED: 3D Shape Region Decomposition with Learned Local Operations
|
SIGGRAPH ASIA 2022
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SHRED, a method for 3D SHape REgion Decomposition. SHRED takes a
3D point cloud as input and uses learned local operations to produce a
segmentation that approximates fine-grained part instances. We endow SHRED with
three decomposition operations: splitting regions, fixing the boundaries
between regions, and merging regions together. Modules are trained
independently and locally, allowing SHRED to generate high-quality
segmentations for categories not seen during training. We train and evaluate
SHRED with fine-grained segmentations from PartNet; using its merge-threshold
hyperparameter, we show that SHRED produces segmentations that better respect
ground-truth annotations compared with baseline methods, at any desired
decomposition granularity. Finally, we demonstrate that SHRED is useful for
downstream applications, out-performing all baselines on zero-shot fine-grained
part instance segmentation and few-shot fine-grained semantic segmentation when
combined with methods that learn to label shape regions.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 17:55:15 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 14:54:25 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Jones",
"R. Kenny",
""
],
[
"Habib",
"Aalia",
""
],
[
"Ritchie",
"Daniel",
""
]
] |
new_dataset
| 0.99713 |
2206.07666
|
Jan Lehe\v{c}ka
|
Jan Lehe\v{c}ka, Josef V. Psutka, Josef Psutka
|
Transformer-based Automatic Speech Recognition of Formal and Colloquial
Czech in MALACH Project
|
to be published in Proceedings of TSD 2022
|
TSD 2022. Lecture Notes in Computer Science, vol 13502. Springer,
Cham
|
10.1007/978-3-031-16270-1_25
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Czech is a very specific language due to its large differences between the
formal and the colloquial form of speech. While the formal (written) form is
used mainly in official documents, literature, and public speeches, the
colloquial (spoken) form is used widely among people in casual speeches. This
gap introduces serious problems for ASR systems, especially when training or
evaluating ASR models on datasets containing a lot of colloquial speech, such
as the MALACH project. In this paper, we are addressing this problem in the
light of a new paradigm in end-to-end ASR systems -- recently introduced
self-supervised audio Transformers. Specifically, we are investigating the
influence of colloquial speech on the performance of Wav2Vec 2.0 models and
their ability to transcribe colloquial speech directly into formal transcripts.
We are presenting results with both formal and colloquial forms in the training
transcripts, language models, and evaluation transcripts.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 17:01:20 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Lehečka",
"Jan",
""
],
[
"Psutka",
"Josef V.",
""
],
[
"Psutka",
"Josef",
""
]
] |
new_dataset
| 0.994237 |
2206.15429
|
Shengzhe Hou
|
Shengzhe Hou, Xinming Lu, Wenli Gao, Shuai Jiang, Xingli Zhang
|
Interactive Physically-Based Simulation of Roadheader Robot
| null | null |
10.1007/s13369-022-07335-x
| null |
cs.RO cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Roadheader is an engineering robot widely used in underground engineering and
mining industry. Interactive dynamics simulation of roadheader is a fundamental
problem in unmanned excavation and virtual reality training. However, current
research is only based on traditional animation techniques or commercial game
engines. There are few studies that apply real-time physical simulation of
computer graphics to the field of roadheader robots. This paper aims to present
an interactive physically-based simulation system of roadheader robot. To this
end, an improved multibody simulation method based on generalized coordinates
is proposed. First, our simulation method describes robot dynamics based on
generalized coordinates. Compared to state-of-the-art methods, our method is
more stable and accurate. Numerical simulation results showed that our method
has significantly less error than the game engine in the same number of
iterations. Second, we adopt the symplectic Euler integrator instead of the
conventional fourth-order Runge-Kutta (RK4) method for dynamics iteration.
Compared with other integrators, our method is more stable in energy drift
during long-term simulation. The test results showed that our system achieved
real-time interaction performance of 60 frames per second (fps). Furthermore,
we propose a model format for geometric and robotics modeling of roadheaders to
implement the system. Our interactive simulation system of roadheader meets the
requirements of interactivity, accuracy and stability.
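A toy illustration of why a symplectic (semi-implicit) Euler integrator is preferred for long-term energy stability; the comparison below uses plain explicit Euler on a generic mass-spring system rather than RK4 (which, being non-symplectic, also drifts over long horizons, only more slowly) or the actual roadheader dynamics:

# Sketch comparing semi-implicit (symplectic) Euler with explicit Euler on a mass-spring system.
k, m, dt, steps = 10.0, 1.0, 0.01, 10000

def energy(x, v):
    return 0.5 * m * v * v + 0.5 * k * x * x

# Symplectic Euler: update velocity first, then position with the new velocity.
x, v = 1.0, 0.0
for _ in range(steps):
    v += (-k / m * x) * dt
    x += v * dt
print("symplectic Euler energy:", energy(x, v))   # stays close to the initial 5.0

# Explicit Euler for contrast: energy grows noticeably over the same horizon.
x, v = 1.0, 0.0
for _ in range(steps):
    a = -k / m * x
    x += v * dt
    v += a * dt
print("explicit Euler energy:", energy(x, v))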
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 14:33:50 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Hou",
"Shengzhe",
""
],
[
"Lu",
"Xinming",
""
],
[
"Gao",
"Wenli",
""
],
[
"Jiang",
"Shuai",
""
],
[
"Zhang",
"Xingli",
""
]
] |
new_dataset
| 0.979115 |
2207.02958
|
Peng Yin
|
Shiqi Zhao, Peng Yin, Ge Yi, and Sebastian Scherer
|
SphereVLAD++: Attention-based and Signal-enhanced Viewpoint Invariant
Descriptor
|
8 pages, 7 figures, IEEE Robotics and Automation Letters
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR-based localization approach is a fundamental module for large-scale
navigation tasks, such as last-mile delivery and autonomous driving, and
localization robustness highly relies on viewpoints and 3D feature extraction.
Our previous work provides a viewpoint-invariant descriptor to deal with
viewpoint differences; however, the global descriptor suffers from a low
signal-noise ratio in unsupervised clustering, reducing the distinguishable
feature extraction ability. We develop SphereVLAD++, an attention-enhanced
viewpoint invariant place recognition method in this work. SphereVLAD++
projects the point cloud on the spherical perspective for each unique area and
captures the contextual connections between local features and their
dependencies with global 3D geometry distribution. In return, clustered
elements within the global descriptor are conditioned on local and global
geometries and support the original viewpoint-invariant property of SphereVLAD.
In the experiments, we evaluated the localization performance of SphereVLAD++
on both public KITTI360 datasets and self-generated datasets from the city of
Pittsburgh. The experimental results show that SphereVLAD++ outperforms all
related state-of-the-art 3D place recognition methods under small or even
totally reversed viewpoint differences, with successful retrieval rates 0.69%
and 15.81% better than the second best. Low computation requirements
and high time efficiency also help its application for low-cost robots.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 20:32:43 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 07:28:40 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Zhao",
"Shiqi",
""
],
[
"Yin",
"Peng",
""
],
[
"Yi",
"Ge",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.998861 |
2207.10035
|
Lue Fan
|
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
|
Fully Sparse 3D Object Detection
|
NeurIPS 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the perception range of LiDAR increases, LiDAR-based 3D object detection
becomes a dominant task in the long-range perception task of autonomous
driving. The mainstream 3D object detectors usually build dense feature maps in
the network backbone and prediction head. However, the computational and
spatial costs on the dense feature map are quadratic to the perception range,
which makes them hardly scale up to the long-range setting. To enable efficient
long-range LiDAR-based object detection, we build a fully sparse 3D object
detector (FSD). The computational and spatial cost of FSD is roughly linear to
the number of points and independent of the perception range. FSD is built upon
the general sparse voxel encoder and a novel sparse instance recognition (SIR)
module. SIR first groups the points into instances and then applies
instance-wise feature extraction and prediction. In this way, SIR resolves the
issue of center feature missing, which hinders the design of the fully sparse
architecture for all center-based or anchor-based detectors. Moreover, SIR
avoids the time-consuming neighbor queries in previous point-based methods by
grouping points into instances. We conduct extensive experiments on the
large-scale Waymo Open Dataset to reveal the working mechanism of FSD, and
state-of-the-art performance is reported. To demonstrate the superiority of FSD
in long-range detection, we also conduct experiments on Argoverse 2 Dataset,
which has a much larger perception range ($200m$) than Waymo Open Dataset
($75m$). On such a large perception range, FSD achieves state-of-the-art
performance and is 2.4$\times$ faster than the dense counterpart. Codes will be
released at https://github.com/TuSimple/SST.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 17:01:33 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 15:31:24 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Fan",
"Lue",
""
],
[
"Wang",
"Feng",
""
],
[
"Wang",
"Naiyan",
""
],
[
"Zhang",
"Zhaoxiang",
""
]
] |
new_dataset
| 0.959807 |
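As a rough illustration of the instance-wise feature extraction described in the FSD abstract above, the sketch below mean-pools point features per predicted instance, so the cost scales with the number of points rather than with a dense feature map. It is not the released FSD code; the function name, feature sizes, and pooling choice are assumptions.

```python
import torch

def instance_wise_pool(point_feats, instance_ids, num_instances):
    """Mean-pool per-point features into per-instance features.
    point_feats: (P, C) float, instance_ids: (P,) long in [0, num_instances)."""
    C = point_feats.shape[1]
    sums = torch.zeros(num_instances, C).index_add_(0, instance_ids, point_feats)
    counts = torch.zeros(num_instances).index_add_(
        0, instance_ids, torch.ones_like(instance_ids, dtype=torch.float))
    return sums / counts.clamp(min=1).unsqueeze(1)   # (num_instances, C)

# Toy example: 6 points grouped into 2 instances.
feats = torch.randn(6, 32)
ids = torch.tensor([0, 0, 1, 1, 1, 0])
print(instance_wise_pool(feats, ids, num_instances=2).shape)  # torch.Size([2, 32])
```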
2207.11744
|
Ruhao Wan
|
Ruhao Wan, Yang Li, Shixin Zhu
|
New MDS self-dual codes over finite fields $\mathbb{F}_{r^2}$
|
16 pages, 3 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MDS self-dual codes have nice algebraic structures and are uniquely
determined by lengths. Recently, the construction of MDS self-dual codes of new
lengths has become an important and hot issue in coding theory. In this paper,
we develop the existing theory and construct six new classes of MDS self-dual
codes. Together with our constructions, the proportion of all known MDS
self-dual codes relative to possible MDS self-dual codes generally exceeds 57\%.
As far as we know, this is the largest known ratio. Moreover, some new families
of MDS self-orthogonal codes and MDS almost self-dual codes are also
constructed.
|
[
{
"version": "v1",
"created": "Sun, 24 Jul 2022 13:44:24 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 13:58:57 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Oct 2022 12:01:55 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Wan",
"Ruhao",
""
],
[
"Li",
"Yang",
""
],
[
"Zhu",
"Shixin",
""
]
] |
new_dataset
| 0.994872 |
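The constructions in the abstract above work over $\mathbb{F}_{r^2}$ and are not reproduced here; purely as a hedged illustration of the two defining properties, the sketch below checks self-duality of a generator matrix over a prime field and the MDS condition via the Singleton bound. The helper names and the toy example are assumptions for illustration only.

```python
import numpy as np

def is_self_dual(G: np.ndarray, q: int) -> bool:
    """Defining property of a self-dual code over a prime field GF(q):
    the generator matrix G must be k x 2k and satisfy G @ G^T = 0 (mod q)."""
    k, n = G.shape
    return n == 2 * k and not np.any((G @ G.T) % q)

def is_mds(n: int, k: int, d: int) -> bool:
    """A linear [n, k, d] code is MDS exactly when it meets the Singleton
    bound with equality: d = n - k + 1."""
    return d == n - k + 1

# Smallest example: the binary repetition code [2, 1, 2] is both MDS and self-dual.
G = np.array([[1, 1]])
print(is_self_dual(G, 2), is_mds(2, 1, 2))  # True True
```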
2207.12559
|
Peyton Chandarana
|
MohammadReza Mohammadi, Peyton Chandarana, James Seekings, Sara
Hendrix, Ramtin Zand
|
Static Hand Gesture Recognition for American Sign Language using
Neuromorphic Hardware
|
Authors MohammadReza Mohammadi and Peyton Chandarana contributed equally
| null |
10.1088/2634-4386/ac94f3
| null |
cs.LG cs.AI cs.CV cs.HC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop four spiking neural network (SNN) models for two
static American Sign Language (ASL) hand gesture classification tasks, i.e.,
the ASL Alphabet and ASL Digits. The SNN models are deployed on Intel's
neuromorphic platform, Loihi, and then compared against equivalent deep neural
network (DNN) models deployed on an edge computing device, the Intel Neural
Compute Stick 2 (NCS2). We perform a comprehensive comparison between the two
systems in terms of accuracy, latency, power consumption, and energy. The best
DNN model achieves an accuracy of 99.93% on the ASL Alphabet dataset, whereas
the best performing SNN model has an accuracy of 99.30%. For the ASL-Digits
dataset, the best DNN model achieves an accuracy of 99.76% while the SNN
achieves 99.03%. Moreover, our experimental results show that the
Loihi neuromorphic hardware implementations achieve up to 20.64x and 4.10x
reduction in power consumption and energy, respectively, when compared to NCS2.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 22:28:04 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 21:22:42 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Oct 2022 01:01:26 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Mohammadi",
"MohammadReza",
""
],
[
"Chandarana",
"Peyton",
""
],
[
"Seekings",
"James",
""
],
[
"Hendrix",
"Sara",
""
],
[
"Zand",
"Ramtin",
""
]
] |
new_dataset
| 0.99071 |
2209.00797
|
Zhengxiang Wang
|
Zhengxiang Wang
|
Random Text Perturbations Work, but not Always
|
7 pages; 8 tables; 3 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present three large-scale experiments on binary text matching
classification tasks in both Chinese and English to evaluate the effectiveness
and generalizability of random text perturbations as a data augmentation
approach for NLP. It is found that the augmentation can have both negative and
positive effects on the test set performance of three neural classification
models, depending on whether the models are trained on enough original training
examples. This remains true whether the five random text editing operations
used to augment the text are applied together or separately. Our study strongly
suggests that the effectiveness of random text perturbations is task specific
and not generally positive.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 03:03:51 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Oct 2022 20:39:44 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Wang",
"Zhengxiang",
""
]
] |
new_dataset
| 0.972912 |
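To illustrate the kind of augmentation studied in the abstract above, here is a toy sketch of random token-level perturbations (deletion, swap, duplication). The paper's five editing operations and their exact probabilities are not reproduced; the function name and parameters are assumptions.

```python
import random

def random_perturb(tokens, p=0.1, seed=None):
    """Apply simple random edits to a token list: each token is deleted,
    swapped with its predecessor, or duplicated with probability p."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p:                        # random deletion
            continue
        elif r < 2 * p and out:          # random swap with the previous token
            out[-1], tok = tok, out[-1]
            out.append(tok)
        elif r < 3 * p:                  # random duplication (insertion)
            out.extend([tok, tok])
        else:
            out.append(tok)
    return out if out else tokens        # never return an empty sequence

print(random_perturb("data augmentation can hurt small models".split(), p=0.2, seed=3))
```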
2209.09124
|
Payam Nikdel
|
Payam Nikdel, Mohammad Mahdavian, Mo Chen
|
DMMGAN: Diverse Multi Motion Prediction of 3D Human Joints using
Attention-Based Generative Adverserial Network
| null | null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human motion prediction is a fundamental part of many human-robot
applications. Despite the recent progress in human motion prediction, most
studies simplify the problem by predicting the human motion relative to a fixed
joint and/or only limit their model to predict one possible future motion.
However, due to the complex nature of human motion, a single output cannot
reflect all the possible actions one can perform. Also, for any robotics
application, we need the full human motion, including the user trajectory, not
just a 3D pose relative to the hip joint.
In this paper, we try to address these two issues by proposing a
transformer-based generative model for forecasting multiple diverse human
motions. Our model generates \textit{N} possible future motions by querying a
history of human motion. Our model first predicts the pose of the body relative
to the hip joint. Then the \textit{Hip Prediction Module} predicts the
trajectory of the hip movement for each predicted pose frame. To encourage
diverse future motions, we introduce a similarity loss that penalizes small
pairwise sample distances. We show that our system outperforms the
state-of-the-art in human motion prediction while predicting diverse
multi-motion future trajectories with hip movements.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 23:22:33 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Oct 2022 23:19:32 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Nikdel",
"Payam",
""
],
[
"Mahdavian",
"Mohammad",
""
],
[
"Chen",
"Mo",
""
]
] |
new_dataset
| 0.987738 |
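As a hedged sketch of the diversity objective mentioned in the DMMGAN abstract above, the snippet below penalizes predicted motion samples that lie too close to each other in joint space. The hinge form, margin, and tensor shapes are assumptions rather than the paper's exact similarity loss.

```python
import torch

def pairwise_diversity_loss(samples, margin=1.0):
    """Penalize small pairwise distances between the N predicted motions for
    one history, pushing samples apart.  samples: (N, T, J, 3)."""
    N = samples.shape[0]
    flat = samples.reshape(N, -1)
    dists = torch.cdist(flat, flat, p=2)                       # (N, N)
    mask = ~torch.eye(N, dtype=torch.bool, device=flat.device)  # off-diagonal pairs
    return torch.clamp(margin - dists[mask], min=0).mean()      # hinge on closeness

loss = pairwise_diversity_loss(torch.randn(5, 25, 22, 3))
print(loss.item())
```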
2209.12354
|
Yinghao Huang
|
Yinghao Huang (1), Omid Tehari (1), Michael J. Black (1), Dimitrios
Tzionas (2) ((1) Max Planck Institute for Intelligent Systems, T\"ubingen,
Germany, (2) University of Amsterdam, Amsterdam, The Netherlands)
|
InterCap: Joint Markerless 3D Tracking of Humans and Objects in
Interaction
|
To appear at GCPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans constantly interact with daily objects to accomplish tasks. To
understand such interactions, computers need to reconstruct these from cameras
observing whole-body interaction with scenes. This is challenging due to
occlusion between the body and objects, motion blur, depth/scale ambiguities,
and the low image resolution of hands and graspable object parts. To make the
problem tractable, the community focuses either on interacting hands, ignoring
the body, or on interacting bodies, ignoring hands. The GRAB dataset addresses
dexterous whole-body interaction but uses marker-based MoCap and lacks images,
while BEHAVE captures video of body object interaction but lacks hand detail.
We address the limitations of prior work with InterCap, a novel method that
reconstructs interacting whole-bodies and objects from multi-view RGB-D data,
using the parametric whole-body model SMPL-X and known object meshes. To tackle
the above challenges, InterCap uses two key observations: (i) Contact between
the hand and object can be used to improve the pose estimation of both. (ii)
Azure Kinect sensors allow us to set up a simple multi-view RGB-D capture
system that minimizes the effect of occlusion while providing reasonable
inter-camera synchronization. With this method we capture the InterCap dataset,
which contains 10 subjects (5 males and 5 females) interacting with 10 objects
of various sizes and affordances, including contact with the hands or feet. In
total, InterCap has 223 RGB-D videos, resulting in 67,357 multi-view frames,
each containing 6 RGB-D images. Our method provides pseudo ground-truth body
meshes and objects for each video frame. Our InterCap method and dataset fill
an important gap in the literature and support many research directions. Our
data and code are available for research purposes.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 00:46:49 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Oct 2022 21:41:29 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Huang",
"Yinghao",
""
],
[
"Tehari",
"Omid",
""
],
[
"Black",
"Michael J.",
""
],
[
"Tzionas",
"Dimitrios",
""
]
] |
new_dataset
| 0.994619 |