Schema: id, submitter (⌀), authors, title, comments (⌀), journal-ref (⌀), doi (⌀), report-no (⌀), categories, abstract are strings; license is one of 9 string classes; versions and authors_parsed are lists; update_date is a timestamp[s]; prediction is a single string class; probability is a float64 in [0.95, 1].

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1708.06268
|
Carlos Mosquera
|
Roberto López-Valcarce, Carlos Mosquera
|
Partial-Duplex Amplify-and-Forward Relaying: Spectral Efficiency
Analysis under Self-Interference
|
Submitted to IEEE Transactions on Wireless Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel mode of operation for Amplify-and-Forward relays in which
the spectra of the relay input and output signals partially overlap. This
partial-duplex relaying mode encompasses half-duplex and full-duplex as
particular cases. By viewing the partial-duplex relay as a bandwidth-preserving
Linear Periodic Time-Varying system, an analysis of the spectral efficiency in
the presence of self-interference is developed. In contrast with previous
works, self-interference is regarded as a useful information-bearing component
rather than simply treated as noise. This approach reveals that previous
results regarding the impact of self-interference on (full-duplex) relay
performance are overly pessimistic. Based on a frequency-domain interpretation
of the effect of self-interference, a number of suboptimal decoding
architectures at the destination node are also discussed. It is found that the
partial-duplex relaying mode may provide an attractive tradeoff between
spectral efficiency and receiver complexity.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2017 14:57:48 GMT"
}
] | 2017-08-22T00:00:00 |
[
[
"López-Valcarce",
"Roberto",
""
],
[
"Mosquera",
"Carlos",
""
]
] |
new_dataset
| 0.985507 |
1708.06276
|
Alessandro Saffiotti
|
Barbara Bruno, Nak Young Chong, Hiroko Kamide, Sanjeev Kanoria,
Jaeryoung Lee, Yuto Lim, Amit Kumar Pandey, Chris Papadopoulos, Irena
Papadopoulos, Federico Pecora, Alessandro Saffiotti, Antonio Sgorbissa
|
The CARESSES EU-Japan project: making assistive robots culturally
competent
|
Paper presented at: Ambient Assisted Living, Italian Forum. Genova,
Italy, June 12--15, 2017
| null | null | null |
cs.RO cs.AI cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The nursing literature shows that cultural competence is an important
requirement for effective healthcare. We claim that personal assistive robots
should likewise be culturally competent, that is, they should be aware of
general cultural characteristics and of the different forms they take in
different individuals, and take these into account while perceiving, reasoning,
and acting. The CARESSES project is a Europe-Japan collaborative effort that
aims at designing, developing and evaluating culturally competent assistive
robots. These robots will be able to adapt the way they behave, speak and
interact to the cultural identity of the person they assist. This paper
describes the approach taken in the CARESSES project, its initial steps, and
its future plans.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2017 15:12:54 GMT"
}
] | 2017-08-22T00:00:00 |
[
[
"Bruno",
"Barbara",
""
],
[
"Chong",
"Nak Young",
""
],
[
"Kamide",
"Hiroko",
""
],
[
"Kanoria",
"Sanjeev",
""
],
[
"Lee",
"Jaeryoung",
""
],
[
"Lim",
"Yuto",
""
],
[
"Pandey",
"Amit Kumar",
""
],
[
"Papadopoulos",
"Chris",
""
],
[
"Papadopoulos",
"Irena",
""
],
[
"Pecora",
"Federico",
""
],
[
"Saffiotti",
"Alessandro",
""
],
[
"Sgorbissa",
"Antonio",
""
]
] |
new_dataset
| 0.979798 |
1708.06308
|
Qiang Xu
|
Qiang Xu, Rong Zheng, Ezzeldin Tahoun
|
Detecting Location Fraud in Indoor Mobile Crowdsensing
|
6 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile crowdsensing allows a large number of mobile devices to measure
phenomena of common interests and form a body of knowledge about natural and
social environments. In order to get location annotations for indoor mobile
crowdsensing, reference tags are usually deployed which are susceptible to
tampering and compromises by attackers. In this work, we consider three types
of location-related attacks including tag forgery, tag misplacement, and tag
removal. Different detection algorithms are proposed to deal with these
attacks. First, we introduce location-dependent fingerprints as supplementary
information for better location identification. A truth discovery algorithm is
then proposed to detect falsified data. Moreover, visiting patterns are
utilized for the detection of tag misplacement and removal. Experiments on both
crowdsensed and emulated datasets show that the proposed algorithms can detect
all three types of attacks with high accuracy.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2017 16:09:05 GMT"
}
] | 2017-08-22T00:00:00 |
[
[
"Xu",
"Qiang",
""
],
[
"Zheng",
"Rong",
""
],
[
"Tahoun",
"Ezzeldin",
""
]
] |
new_dataset
| 0.984304 |
1708.06312
|
Linda Anticoli
|
Linda Anticoli and Carla Piazza and Leonardo Taglialegne and Paolo
Zuliani
|
Verifying Quantum Programs: From Quipper to QPMC
|
Long version
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present a translation from the quantum programming language
Quipper to the QPMC model checker, with the main aim of verifying Quipper
programs. Quipper is an embedded functional programming language for quantum
computation. It is above all a circuit description language; for this reason it
uses the vector state formalism, and its main purpose is to make circuit
implementation easy by providing high-level operations for circuit manipulation.
Quipper provides both a high-level circuit-building interface and a simulator.
QPMC is a model checker for quantum protocols based on the density matrix
formalism. QPMC extends the probabilistic model checker IscasMC, making it
possible to formally verify properties specified in the temporal logic QCTL on Quantum
Markov Chains. We implemented and tested our translation on several quantum
algorithms, including Grover's quantum search.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2017 16:26:23 GMT"
}
] | 2017-08-22T00:00:00 |
[
[
"Anticoli",
"Linda",
""
],
[
"Piazza",
"Carla",
""
],
[
"Taglialegne",
"Leonardo",
""
],
[
"Zuliani",
"Paolo",
""
]
] |
new_dataset
| 0.996947 |
1708.05417
|
Raja Naeem Akram
|
Collins Mtita, Maryline Laurent, Damien Sauveron, Raja Naeem Akram,
Konstantinos Markantonakis and Serge Chaumette
|
Serverless Protocols for Inventory and Tracking with a UAV
|
11 pages; The 36th IEEE/AIAA Digital Avionics Systems
Conference (DASC'17)
| null | null | null |
cs.CR cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is widely acknowledged that the proliferation of Unmanned Aerial Vehicles
(UAVs) may lead to serious concerns regarding avionics safety, particularly
when end-users are not adhering to air safety regulations. There are, however,
domains in which UAVs may help to increase the safety of airplanes and the
management of flights and airport resources that often require substantial
human resources. For instance, Paris Charles de Gaulle airport (CDG) has more
than 7,000 staff and supports 30,000 direct jobs for more than 60 million
passengers per year (as of 2016). Indeed, these new systems can be used
beneficially for several purposes, even in sensitive areas like airports. Among
the considered applications are those that suggest using UAVs to enhance safety
of on-ground airplanes; for instance, by collecting (once the aircraft has
landed) data recorded by different systems during the flight (like the sensors
of the Aircraft Data Networks - ADN) or by examining the state of airplane
structure. In this paper, we propose using UAVs, under the control of the
airport authorities, to inventory and track various tagged assets, such as
luggage, supplies required for the flights, and maintenance tools. The aim of
our proposal is to make airport management systems more efficient for
operations requiring inventory and tracking, to increase safety
(sensitive assets such as refueling tanks or sensitive pieces of luggage can
be tracked), and thus to increase financial profit.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2017 19:33:10 GMT"
}
] | 2017-08-21T00:00:00 |
[
[
"Mtita",
"Collins",
""
],
[
"Laurent",
"Maryline",
""
],
[
"Sauveron",
"Damien",
""
],
[
"Akram",
"Raja Naeem",
""
],
[
"Markantonakis",
"Konstantinos",
""
],
[
"Chaumette",
"Serge",
""
]
] |
new_dataset
| 0.983448 |
1708.05514
|
Weimin Wang
|
Weimin Wang, Ken Sakurada, Nobuo Kawaguchi
|
Reflectance Intensity Assisted Automatic and Accurate Extrinsic
Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
|
20 pages, submitted to the journal of Remote Sensing
|
Remote Sensing, 9(8):851 (2017)
|
10.3390/rs9080851
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel method for fully automatic and convenient
extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally
printed chessboard. The proposed method is based on the 3D corner estimation of
the chessboard from the sparse point cloud generated by one frame scan of the
LiDAR. To estimate the corners, we formulate a full-scale model of the
chessboard and fit it to the segmented 3D points of the chessboard. The model
is fitted by optimizing the cost function under constraints of correlation
between the reflectance intensity of laser and the color of the chessboard's
patterns. Powell's method is introduced for resolving the discontinuity problem
in optimization. The corners of the fitted model are considered as the 3D
corners of the chessboard. Once the corners of the chessboard in the 3D point
cloud are estimated, the extrinsic calibration of the two sensors is converted
to a 3D-2D matching problem. The corresponding 3D-2D points are used to
calculate the absolute pose of the two sensors with Unified Perspective-n-Point
(UPnP). Further, the calculated parameters are regarded as initial values and
are refined using the Levenberg-Marquardt method. The performance of the
proposed corner detection method from the 3D point cloud is evaluated using
simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR
and a Ladybug3 camera under the proposed re-projection error metric,
qualitatively and quantitatively demonstrate the accuracy and stability of the
final extrinsic calibration parameters.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2017 05:58:02 GMT"
}
] | 2017-08-21T00:00:00 |
[
[
"Wang",
"Weimin",
""
],
[
"Sakurada",
"Ken",
""
],
[
"Kawaguchi",
"Nobuo",
""
]
] |
new_dataset
| 0.998535 |
1708.05543
|
Andrea Romanoni
|
Andrea Romanoni and Daniele Fiorenti and Matteo Matteucci
|
Mesh-based 3D Textured Urban Mapping
|
accepted at iros 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the era of autonomous driving, urban mapping represents a core step to let
vehicles interact with the urban context. Successful mapping algorithms have
been proposed in the last decade that build the map by leveraging data from a
single sensor. The focus of the system presented in this paper is twofold: the
joint estimation of a 3D map from lidar data and images, based on a 3D mesh,
and its texturing. Indeed, even though most surveying vehicles for mapping are
equipped with cameras and lidar, existing mapping algorithms usually rely on
either images or lidar data; moreover, both image-based and lidar-based systems
often represent the map as a point cloud, while a continuous textured mesh
representation would be useful for visualization and navigation purposes. In
the proposed framework, we join the accuracy of the 3D lidar data, and the
dense information and appearance carried by the images, in estimating a
visibility consistent map upon the lidar measurements, and refining it
photometrically through the acquired images. We evaluate the proposed framework
against the KITTI dataset and show the performance improvement with respect
to two state-of-the-art urban mapping algorithms and two widely used surface
reconstruction algorithms in computer graphics.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2017 09:43:10 GMT"
}
] | 2017-08-21T00:00:00 |
[
[
"Romanoni",
"Andrea",
""
],
[
"Fiorenti",
"Daniele",
""
],
[
"Matteucci",
"Matteo",
""
]
] |
new_dataset
| 0.950756 |
1708.05595
|
Huaxin Xiao
|
Huaxin Xiao, Jiashi Feng, Yunchao Wei, Maojun Zhang
|
Self-explanatory Deep Salient Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient object detection has seen remarkable progress driven by deep learning
techniques. However, most deep learning based salient object detection
methods are black-box in nature and lacking in interpretability. This paper
proposes the first self-explanatory saliency detection network that explicitly
exploits low- and high-level features for salient object detection. We
demonstrate that such supportive clues not only significantly enhance the
performance of salient object detection but also give better-justified
detection results. More specifically, we develop a multi-stage saliency encoder
to extract multi-scale features which contain both low- and high-level saliency
context. Dense short- and long-range connections are introduced to reuse these
features iteratively. Benefiting from the direct access to low- and high-level
features, the proposed saliency encoder can not only model the object context
but also preserve the boundary. Furthermore, a self-explanatory generator is
proposed to interpret how the proposed saliency encoder or other deep saliency
models make decisions. The generator simulates the absence of interesting
features by preventing these features from contributing to the saliency
classifier and estimates the corresponding saliency prediction without these
features. A comparison function, saliency explanation, is defined to measure
the prediction changes between deep saliency models and the corresponding
generator. By visualizing the differences, we can interpret the capability
of different deep neural network based saliency detection models and
demonstrate that our proposed model indeed uses a more reasonable structure for
salient object detection. Extensive experiments on five popular benchmark
datasets and the visualized saliency explanations demonstrate that the proposed
method achieves new state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2017 13:19:01 GMT"
}
] | 2017-08-21T00:00:00 |
[
[
"Xiao",
"Huaxin",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Zhang",
"Maojun",
""
]
] |
new_dataset
| 0.977265 |
1708.05618
|
Banu Kabakulak
|
Banu Kabakulak, Z. Caner Taşkın, and Ali Emre Pusane
|
Optimization-Based Decoding Algorithms for LDPC Convolutional Codes in
Communication Systems
|
31 pages, 14 figures, 11 tables
| null | null | null |
cs.IT cs.DM math.IT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a digital communication system, information is sent from one place to
another over a noisy communication channel. It may be possible to detect and
correct errors that occur during the transmission if one encodes the original
information by adding redundant bits. Low-density parity-check (LDPC)
convolutional codes, a member of the LDPC code family, encode the original
information to improve error correction capability. In practice these codes are
used to decode very long information sequences, where the information arrives
in subsequent packets over time, such as video streams. We consider the problem
of decoding the received information with minimum error from an optimization
point of view and investigate integer programming-based exact and heuristic
decoding algorithms for its solution. In particular, we consider relax-and-fix
heuristics that decode information in small windows. Computational results
indicate that our approaches identify near-optimal solutions significantly
faster than a commercial solver at high channel error rates. Our proposed
algorithms can find higher quality solutions compared with commonly used
iterative decoding heuristics.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2017 14:04:31 GMT"
}
] | 2017-08-21T00:00:00 |
[
[
"Kabakulak",
"Banu",
""
],
[
"Taşkın",
"Z. Caner",
""
],
[
"Pusane",
"Ali Emre",
""
]
] |
new_dataset
| 0.955038 |
1706.08997
|
Abhaykumar Kumbhar
|
Abhaykumar Kumbhar, Simran Singh, Ismail Guvenc
|
UAV Assisted Public Safety Communications with LTE-Advanced HetNets and
FeICIC
|
Accepted Proc. IEEE Annual International Symposium on Personal,
Indoor, and Mobile Radio Communications (PIMRC) 2017
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Establishing a reliable communication infrastructure at an emergency site is
a crucial task for mission-critical and real-time public safety communications
(PSC). To this end, use of the unmanned aerial vehicles (UAVs) has recently
received extensive interest for PSC to establish reliable connectivity in a
heterogeneous network (HetNet) environment. These UAVs can be deployed as
unmanned aerial base stations (UABSs) as a part of HetNet infrastructure. In
this article, we explore the role of agile UABSs in LTE-Advanced HetNets by
applying 3GPP Release 11 further-enhanced inter-cell interference coordination
(FeICIC) and cell range expansion (CRE) techniques. Through simulations, we
compare the system-wide 5th percentile spectral efficiency (SE) when UABSs are
deployed in a hexagonal grid and when their locations are optimized using a
genetic algorithm, while also jointly optimizing the CRE and the FeICIC
parameters. Our simulation results show that at optimized UABS locations, the
3GPP Release 11 FeICIC with reduced power subframes can provide considerably
better 5th percentile SE than the 3GPP Release 10 with almost blank subframes.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2017 18:21:39 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2017 02:47:58 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Kumbhar",
"Abhaykumar",
""
],
[
"Singh",
"Simran",
""
],
[
"Guvenc",
"Ismail",
""
]
] |
new_dataset
| 0.98794 |
1708.00370
|
Behnaz Nojavanasghari
|
Behnaz Nojavanasghari, Charles E. Hughes, Tadas Baltrusaitis, and
Louis-Philippe Morency
|
Hand2Face: Automatic Synthesis and Recognition of Hand Over Face
Occlusions
|
Accepted to International Conference on Affective Computing and
Intelligent Interaction (ACII), 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A person's face discloses important information about their affective state.
Although there has been extensive research on recognition of facial
expressions, the performance of existing approaches is challenged by facial
occlusions. Facial occlusions are often treated as noise and discarded in
recognition of affective states. However, hand over face occlusions can provide
additional information for recognition of some affective states such as
curiosity, frustration and boredom. One of the reasons that this problem has
not gained attention is the lack of naturalistic occluded faces that contain
hand over face occlusions as well as other types of occlusions. Traditional
approaches for obtaining affective data are time-consuming and expensive, which
forces researchers in affective computing to work on small datasets. This
limitation affects the generalizability of models and prevents researchers from
taking advantage of recent advances in deep learning that have shown great
success in many fields but require large volumes of data. In this paper, we
first introduce a novel framework for synthesizing naturalistic facial
occlusions from an initial dataset of non-occluded faces and separate images of
hands, reducing the costly process of data collection and annotation. We then
propose a model for facial occlusion type recognition to differentiate between
hand over face occlusions and other types of occlusions such as scarves, hair,
glasses and objects. Finally, we present a model to localize hand over face
occlusions and identify the occluded regions of the face.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2017 14:46:09 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2017 02:06:44 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Nojavanasghari",
"Behnaz",
""
],
[
"Hughes",
"Charles. E.",
""
],
[
"Baltrusaitis",
"Tadas",
""
],
[
"Morency",
"Louis-philippe",
""
]
] |
new_dataset
| 0.982666 |
1708.04680
|
Marcus Bloice
|
Marcus D. Bloice, Christof Stocker, Andreas Holzinger
|
Augmentor: An Image Augmentation Library for Machine Learning
| null | null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generation of artificial data based on existing observations, known as
data augmentation, is a technique used in machine learning to improve model
accuracy, generalisation, and to control overfitting. Augmentor is a software
package, available in both Python and Julia versions, that provides a
high-level API for the expansion of image data using a stochastic, pipeline-based
approach which effectively allows for images to be sampled from a distribution
of augmented images at runtime. Augmentor provides methods for most standard
augmentation practices as well as several advanced features such as
label-preserving, randomised elastic distortions, and provides many helper
functions for typical augmentation tasks used in machine learning.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2017 11:19:44 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Bloice",
"Marcus D.",
""
],
[
"Stocker",
"Christof",
""
],
[
"Holzinger",
"Andreas",
""
]
] |
new_dataset
| 0.997808 |
1708.05209
|
Khaled Abdelfadeel
|
Khaled Q. Abdelfadeel, Victor Cionca, Dirk Pesch
|
LSCHC: Layered Static Context Header Compression for LPWANs
|
6 pages, 8 figures, accepted for publication in the CHANTS workshop at
ACM MobiCom
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supporting IPv6/UDP/CoAP protocols over Low Power Wide Area Networks (LPWANs)
can bring open networking, interconnection, and cooperation to this new type of
Internet of Things networks. However, accommodating these protocols over these
very low bandwidth networks requires efficient header compression schemes to
meet the limited frame size of these networks, where only one or two octets are
available to transmit all headers. Recently, the Internet Engineering Task
Force (IETF) LPWAN working group drafted the Static Context Header Compression
(SCHC), a new header compression scheme for LPWANs, which can provide a good
compression factor without complex synchronization. In this paper, we present
an implementation and evaluation of SCHC. We compare SCHC with IPHC, which also
targets constrained networks. Additionally, we propose an enhancement of SCHC,
Layered SCHC (LSCHC). LSCHC is a layered context that reduces memory
consumption and processing complexity, and adds flexibility when compressing
packets. Finally, we perform calculations to show the impact of SCHC/LSCHC on
an example LPWAN technology, e.g. LoRaWAN, from the point of view of
transmission time and reliability.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2017 11:40:51 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Abdelfadeel",
"Khaled Q.",
""
],
[
"Cionca",
"Victor",
""
],
[
"Pesch",
"Dirk",
""
]
] |
new_dataset
| 0.998806 |
1708.05233
|
Kiev Gama
|
Herbertt Diniz, Kiev Gama and Robson Fidalgo
|
A Semiotics-inspired Domain-Specific Modeling Language for Complex Event
Processing Rules
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Complex Event Processing (CEP) is one technique used for handling data
flows. It allows conditions to be pre-established through rules, firing events
when certain patterns are found in the data flows. Because the rules for
defining such patterns are expressed with specific languages, users of these
technologies must understand the underlying expression syntax. To reduce the
complexity of writing CEP rules, some researchers employ Domain-Specific
Modeling Languages (DSMLs) to provide modelling through visual tools. However,
existing approaches ignore some user design techniques that facilitate
usability. As a result, the resulting tools have eventually become more complex
to handle than conventional CEP usage. Also, research on DSML tools
targeting CEP does not present any evaluation of usability. This article
proposes a DSML combined with visual notation techniques to create CEP rules
with a more intuitive development model adapted to the needs of non-expert users.
The resulting tool was evaluated by non-expert users, who were able to
easily create CEP rules without prior knowledge of the underlying expression
language.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2017 12:37:18 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Diniz",
"Herbertt",
""
],
[
"Gama",
"Kiev",
""
],
[
"Fidalgo",
"Robson",
""
]
] |
new_dataset
| 0.957703 |
1708.05347
|
Minjia Shi
|
MinJia Shi, Zahra Sepasdar, Rongsheng Wu, Patrick Solé
|
Two weight $\mathbb{Z}_{p^k}$-codes, $p$ odd prime
|
This paper was submitted in early 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that regular homogeneous two-weight $\mathbb{Z}_{p^k}$-codes where
$p$ is odd and $k\geqslant 2$ with dual Hamming distance at least four do not
exist. The proof relies on existence conditions for the strongly regular graph
built on the cosets of the dual code.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2017 16:11:24 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Shi",
"MinJia",
""
],
[
"Sepasdar",
"Zahra",
""
],
[
"Wu",
"Rongsheng",
""
],
[
"Solé",
"Patrick",
""
]
] |
new_dataset
| 0.9999 |
1708.05349
|
Aayush Bansal
|
Aayush Bansal and Yaser Sheikh and Deva Ramanan
|
PixelNN: Example-based Image Synthesis
|
Project Page: http://www.cs.cmu.edu/~aayushb/pixelNN/
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a simple nearest-neighbor (NN) approach that synthesizes
high-frequency photorealistic images from an "incomplete" signal such as a
low-resolution image, a surface normal map, or edges. Current state-of-the-art
deep generative models designed for such conditional image synthesis lack two
important things: (1) they are unable to generate a large set of diverse
outputs, due to the mode collapse problem, and (2) they are not interpretable,
making it difficult to control the synthesized output. We demonstrate that NN
approaches potentially address such limitations, but suffer in accuracy on
small datasets. We design a simple pipeline that combines the best of both
worlds: the first stage uses a convolutional neural network (CNN) to map the
input to an (overly smoothed) image, and the second stage uses a pixel-wise
nearest neighbor method to map the smoothed output to multiple high-quality,
high-frequency outputs in a controllable manner. We demonstrate our approach
for various input modalities, and for various domains ranging from human faces
to cats-and-dogs to shoes and handbags.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2017 16:13:42 GMT"
}
] | 2017-08-18T00:00:00 |
[
[
"Bansal",
"Aayush",
""
],
[
"Sheikh",
"Yaser",
""
],
[
"Ramanan",
"Deva",
""
]
] |
new_dataset
| 0.994959 |
1605.01091
|
Francois Meyer
|
Nathan D Monnig, Francois G Meyer
|
The Resistance Perturbation Distance: A Metric for the Analysis of
Dynamic Networks
| null | null | null | null |
cs.SI cs.DM physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To quantify the fundamental evolution of time-varying networks, and detect
abnormal behavior, one needs a notion of temporal difference that captures
significant organizational changes between two successive instants. In this
work, we propose a family of distances that can be tuned to quantify structural
changes occurring on a graph at different scales: from the local scale formed
by the neighbors of each vertex, to the largest scale that quantifies the
connections between clusters, or communities. Our approach results in the
definition of a true distance, and not merely a notion of similarity. We
propose fast (linear in the number of edges) randomized algorithms that can
quickly compute an approximation to the graph metric. The third contribution
involves a fast algorithm to increase the robustness of a network by optimally
decreasing the Kirchhoff index. Finally, we conduct several experiments on
synthetic graphs and real networks, and we demonstrate that we can detect
configurational changes that are directly related to the hidden variables
governing the evolution of dynamic networks.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2016 21:11:11 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2017 22:56:19 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Monnig",
"Nathan D",
""
],
[
"Meyer",
"Francois G",
""
]
] |
new_dataset
| 0.970242 |
1611.08175
|
Shun Watanabe
|
Shun Watanabe
|
Neyman-Pearson Test for Zero-Rate Multiterminal Hypothesis Testing
|
34 pages, 8 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of zero-rate multiterminal hypothesis testing is revisited from
the perspective of information-spectrum approach and finite blocklength
analysis. A Neyman-Pearson-like test is proposed and its non-asymptotic
performance is clarified; for a short block length, it is numerically
determined that the proposed test is superior to the previously reported
Hoeffding-like test proposed by Han-Kobayashi. For a large deviation regime, it
is shown that our proposed test achieves an optimal trade-off between the type
I and type II exponents presented by Han-Kobayashi. Among the class of
symmetric (type based) testing schemes, when the type I error probability is
non-vanishing, the proposed test is optimal up to the second-order term of the
type II error exponent; the latter term is characterized in terms of the
variance of the projected relative entropy density. The information geometry
method plays an important role in the analysis as well as the construction of
the test.
|
[
{
"version": "v1",
"created": "Thu, 24 Nov 2016 13:23:08 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2017 04:40:17 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Aug 2017 09:10:41 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Watanabe",
"Shun",
""
]
] |
new_dataset
| 0.983554 |
1707.09716
|
Anastasia Mavridou
|
Anastasia Mavridou, Valentin Rutz, and Simon Bliudze
|
Coordination of Dynamic Software Components with JavaBIP
|
Technical report that accompanies the paper accepted at the 14th
International Conference on Formal Aspects of Component Software
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
JavaBIP allows the coordination of software components by clearly separating
the functional and coordination aspects of the system behavior. JavaBIP
implements the principles of the BIP component framework rooted in rigorous
operational semantics. Recent work both on BIP and JavaBIP allows the
coordination of static components defined prior to system deployment, i.e., the
architecture of the coordinated system is fixed in terms of its component
instances. Nevertheless, modern systems often make use of components that can
register and deregister dynamically during system execution. In this paper, we
present an extension of JavaBIP that can handle this type of dynamicity. We use
first-order interaction logic to define synchronization constraints based on
component types. Additionally, we use directed graphs with edge coloring to
model dependencies among components that determine the validity of an online
system. We present the software architecture of our implementation, and we
provide and discuss performance evaluation results.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2017 04:19:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2017 23:35:58 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Mavridou",
"Anastasia",
""
],
[
"Rutz",
"Valentin",
""
],
[
"Bliudze",
"Simon",
""
]
] |
new_dataset
| 0.990031 |
1708.04636
|
David Hallac
|
David Hallac, Abhijit Sharang, Rainer Stahlmann, Andreas Lamprecht,
Markus Huber, Martin Roehder, Rok Sosic, Jure Leskovec
|
Driver Identification Using Automobile Sensor Data from a Single Turn
| null | null | null | null |
cs.HC cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As automotive electronics continue to advance, cars are becoming more and
more reliant on sensors to perform everyday driving operations. These sensors
are omnipresent and help the car navigate, reduce accidents, and provide
comfortable rides. However, they can also be used to learn about the drivers
themselves. In this paper, we propose a method to predict, from sensor data
collected at a single turn, the identity of a driver out of a given set of
individuals. We cast the problem in terms of time series classification, where
our dataset contains sensor readings at one turn, repeated several times by
multiple drivers. We build a classifier to find unique patterns in each
individual's driving style, which are visible in the data even on such a short
road segment. To test our approach, we analyze a new dataset collected by AUDI
AG and Audi Electronics Venture, where a fleet of test vehicles was equipped
with automotive data loggers storing all sensor readings on real roads. We show
that turns are particularly well-suited for detecting variations across
drivers, especially when compared to straightaways. We then focus on the 12
most frequently made turns in the dataset, which include rural, urban, highway
on-ramps, and more, obtaining accurate identification results and learning
useful insights about driver behavior in a variety of settings.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2017 17:15:00 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Hallac",
"David",
""
],
[
"Sharang",
"Abhijit",
""
],
[
"Stahlmann",
"Rainer",
""
],
[
"Lamprecht",
"Andreas",
""
],
[
"Huber",
"Markus",
""
],
[
"Roehder",
"Martin",
""
],
[
"Sosic",
"Rok",
""
],
[
"Leskovec",
"Jure",
""
]
] |
new_dataset
| 0.998171 |
1708.04656
|
Mahdi Miraz
|
Sajid Khan, Md Al Shayokh, Mahdi H. Miraz and Monir Bhuiyan
|
A Framework for Android Based Shopping Mall Applications
| null |
Proceedings of the International Conference on eBusiness,
eCommerce, eManagement, eLearning and eGovernance (IC5E 2014), held at
University of Greenwich, London, UK, 30-31 July 2014, pp. 27-32
| null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Android is Google's latest open source software platform for mobile devices
which has already attained enormous popularity. The purpose of this paper is to
describe the development of mobile application for shopping mall using Android
platform. A prototype was developed for the shoppers of Bashundhara Shopping
Mall of Bangladesh. This prototype will serve as a framework for any such
applications (apps). The paper presents a practical demonstration of how to
integrate shops' information, such as names, categories, locations,
descriptions, floor layout and so forth, with a map module via an Android
application. A summary of survey results from the related literature and
projects has also been included. A critical evaluation of the prototype, along
with a future research and development plan, is presented. The paper will serve as a
guideline for the researchers and developers to introduce and develop similar
apps.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2017 18:56:47 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Khan",
"Sajid",
""
],
[
"Shayokh",
"Md Al",
""
],
[
"Miraz",
"Mahdi H.",
""
],
[
"Bhuiyan",
"Monir",
""
]
] |
new_dataset
| 0.998768 |
1708.04672
|
Andrey Kurenkov
|
Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak,
Christopher Choy, Silvio Savarese
|
DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction
from a Single Image
|
11 pages, 9 figures, NIPS
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D reconstruction from a single image is a key problem in multiple
applications ranging from robotic manipulation to augmented reality. Prior
methods have tackled this problem through generative models which predict 3D
reconstructions as voxels or point clouds. However, these methods can be
computationally expensive and miss fine details. We introduce a new
differentiable layer for 3D data deformation and use it in DeformNet to learn a
model for 3D reconstruction-through-deformation. DeformNet takes an image
input, searches the nearest shape template from a database, and deforms the
template to match the query image. We evaluate our approach on the ShapeNet
dataset and show that - (a) the Free-Form Deformation layer is a powerful new
building block for Deep Learning models that manipulate 3D data (b) DeformNet
uses this FFD layer combined with shape retrieval for smooth and
detail-preserving 3D reconstruction of qualitatively plausible point clouds
with respect to a single query image (c) compared to other state-of-the-art 3D
reconstruction methods, DeformNet quantitatively matches or outperforms their
benchmarks by significant margins. For more information, visit:
https://deformnet-site.github.io/DeformNet-website/ .
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2017 00:43:19 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Kurenkov",
"Andrey",
""
],
[
"Ji",
"Jingwei",
""
],
[
"Garg",
"Animesh",
""
],
[
"Mehta",
"Viraj",
""
],
[
"Gwak",
"JunYoung",
""
],
[
"Choy",
"Christopher",
""
],
[
"Savarese",
"Silvio",
""
]
] |
new_dataset
| 0.991658 |
1708.04677
|
Nikolaus Correll
|
Nikolaus Correll, Prabal Dutta, Richard Han and Kristofer Pister
|
New Directions: Wireless Robotic Materials
|
To appear at SenSys 2017
| null | null | null |
cs.RO cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe opportunities and challenges with wireless robotic materials.
Robotic materials are multi-functional composites that tightly integrate
sensing, actuation, computation and communication to create smart composites
that can sense their environment and change their physical properties in an
arbitrary programmable manner. Computation and communication in such materials
are based on miniature, possibly wireless, devices that are scattered in the
material and interface with sensors and actuators inside the material. Whereas
routing and processing of information within the material build upon results
from the field of sensor networks, robotic materials are pushing the limits of
sensor networks in both size (down to the order of microns) and numbers of
devices (up to the order of millions). In order to solve the algorithmic and
systems challenges of such an approach, which will involve not only computer
scientists, but also roboticists, chemists and material scientists, the
community requires a common platform - much like the "Mote" that bootstrapped
the widespread adoption of the field of sensor networks - that is small,
provides ample computation, is equipped with basic networking
functionalities, and preferably can be powered wirelessly.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2017 20:29:06 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Correll",
"Nikolaus",
""
],
[
"Dutta",
"Prabal",
""
],
[
"Han",
"Richard",
""
],
[
"Pister",
"Kristofer",
""
]
] |
new_dataset
| 0.999594 |
1708.04681
|
Arman Cohan
|
Arman Cohan, Allan Fong, Raj Ratwani, Nazli Goharian
|
Identifying Harm Events in Clinical Care through Medical Narratives
|
ACM-BCB 2017
| null |
10.1145/3107411.3107485
| null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Preventable medical errors are estimated to be among the leading causes of
injury and death in the United States. To prevent such errors, healthcare
systems have implemented patient safety and incident reporting systems. These
systems enable clinicians to report unsafe conditions and cases where patients
have been harmed due to errors in medical care. These reports are narratives in
natural language and while they provide detailed information about the
situation, it is non-trivial to perform large scale analysis for identifying
common causes of errors and harm to the patients. In this work, we present a
method based on attentive convolutional and recurrent networks for identifying
harm events in patient care and categorize the harm based on its severity
level. We demonstrate that our methods can significantly improve the
performance over existing methods in identifying harm in clinical care.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2017 20:38:37 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Cohan",
"Arman",
""
],
[
"Fong",
"Allan",
""
],
[
"Ratwani",
"Raj",
""
],
[
"Goharian",
"Nazli",
""
]
] |
new_dataset
| 0.982454 |
1708.04716
|
Sing Kiong Nguang
|
Paul Ryuji Chuen-Ying Huang, Sing Kiong Nguang and Ashton Partridge
|
Self-Powered Wireless Sensor
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper develops a novel power harvesting system to harvest ambient RF
energy to power a wireless sensor. Harvesting ambient RF energy is a very
difficult task as the power levels are extremely weak. Simulation results show
zero-threshold MOSFETs are essential in the RF-to-DC conversion process. 0.5 VDC
at the output of the RF to DC conversion stage is the minimum voltage which
must be achieved for the micro-power sensor circuitry to operate. The weakest
power level the proposed system can successfully harvest is -37dBm. The
available power from the FM band has been measured to fluctuate
between -33 and -43 dBm using a ribbon FM dipole antenna. Ambient RF energy would
best be used in conjunction with other forms of harvested ambient energy to
increase diversity and dependability. The potential economic and environmental
benefits make such endeavors truly worthwhile.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2017 23:04:08 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Huang",
"Paul Ryuji Chuen-Ying",
""
],
[
"Nguang",
"Sing Kiong",
""
],
[
"Partridge",
"Ashton",
""
]
] |
new_dataset
| 0.995829 |
1708.04748
|
Steven Goldfeder
|
Steven Goldfeder, Harry Kalodner, Dillon Reisman, Arvind Narayanan
|
When the cookie meets the blockchain: Privacy risks of web payments via
cryptocurrencies
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how third-party web trackers can deanonymize users of
cryptocurrencies. We present two distinct but complementary attacks. On most
shopping websites, third party trackers receive information about user
purchases for purposes of advertising and analytics. We show that, if the user
pays using a cryptocurrency, trackers typically possess enough information
about the purchase to uniquely identify the transaction on the blockchain, link
it to the user's cookie, and further to the user's real identity. Our second
attack shows that if the tracker is able to link two purchases of the same user
to the blockchain in this manner, it can identify the user's entire cluster of
addresses and transactions on the blockchain, even if the user employs
blockchain anonymity techniques such as CoinJoin. The attacks are passive and
hence can be retroactively applied to past purchases. We discuss several
mitigations, but none are perfect.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2017 02:18:03 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Goldfeder",
"Steven",
""
],
[
"Kalodner",
"Harry",
""
],
[
"Reisman",
"Dillon",
""
],
[
"Narayanan",
"Arvind",
""
]
] |
new_dataset
| 0.987649 |
1708.04774
|
Satyam Dwivedi
|
Satyam Dwivedi, John Olof Nilsson, Panos Papadimitratos, Peter
H\"andel
|
CLIMEX: A Wireless Physical Layer Security Protocol Based on Clocked
Impulse Exchanges
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel method and protocol establishing common secrecy based on physical
parameters between two users is proposed. The four physical parameters of users
are their clock frequencies, their relative clock phases and the distance
between them. The protocol proposed between two users is backed by a theoretical
model for the measurements. Further, estimators are proposed to estimate secret
physical parameters. Physically exchanged parameters are shown to be secure by
virtue of their non-observability to adversaries. Under a simplified analysis
based on a testbed setting, it is shown that 38 bits of common secrecy can be
derived for one run of the proposed protocol among users. The method proposed
is also robust against various kinds of active timing attacks and active
impersonating adversaries.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2017 05:37:18 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Dwivedi",
"Satyam",
""
],
[
"Nilsson",
"John Olof",
""
],
[
"Papadimitratos",
"Panos",
""
],
[
"Händel",
"Peter",
""
]
] |
new_dataset
| 0.990014 |
1708.04782
|
Timothy Lillicrap
|
Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander
Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich K\"uttler, John
Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen,
Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy
Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence,
Anders Ekermo, Jacob Repp, Rodney Tsing
|
StarCraft II: A New Challenge for Reinforcement Learning
|
Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2017 06:20:52 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Vinyals",
"Oriol",
""
],
[
"Ewalds",
"Timo",
""
],
[
"Bartunov",
"Sergey",
""
],
[
"Georgiev",
"Petko",
""
],
[
"Vezhnevets",
"Alexander Sasha",
""
],
[
"Yeo",
"Michelle",
""
],
[
"Makhzani",
"Alireza",
""
],
[
"Küttler",
"Heinrich",
""
],
[
"Agapiou",
"John",
""
],
[
"Schrittwieser",
"Julian",
""
],
[
"Quan",
"John",
""
],
[
"Gaffney",
"Stephen",
""
],
[
"Petersen",
"Stig",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Schaul",
"Tom",
""
],
[
"van Hasselt",
"Hado",
""
],
[
"Silver",
"David",
""
],
[
"Lillicrap",
"Timothy",
""
],
[
"Calderone",
"Kevin",
""
],
[
"Keet",
"Paul",
""
],
[
"Brunasso",
"Anthony",
""
],
[
"Lawrence",
"David",
""
],
[
"Ekermo",
"Anders",
""
],
[
"Repp",
"Jacob",
""
],
[
"Tsing",
"Rodney",
""
]
] |
new_dataset
| 0.999306 |
1708.04864
|
Emanuele Rodaro
|
Emanuele Rodaro
|
Synchronizing automata and the language of minimal reset words
|
17 pages
| null | null | null |
cs.FL math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a connection between synchronizing automata and its set $M$ of
minimal reset words, i.e., such that no proper factor is a reset word. We first
show that any synchronizing automaton having the set of minimal reset words
whose set of factors does not contain a word of length at most
$\frac{1}{4}\min\{|u|: u\in I\}+\frac{1}{16}$ has a reset word of length at
most $(n-\frac{1}{2})^{2}$. In the last part of the paper we focus on the
existence of synchronizing automata with a given ideal $I$ that serves as the
set of reset words. To this end, we introduce the notion of the tail structure
of the (not necessarily regular) ideal $I=\Sigma^{*}M\Sigma^{*}$. With this
tool, we first show the existence of an infinite strongly connected
synchronizing automaton $\mathcal{A}$ having $I$ as the set of reset words and
such that every other strongly connected synchronizing automaton having $I$ as
the set of reset words is a homomorphic image of $\mathcal{A}$. Finally, we
show that for any non-unary regular ideal $I$ there is a strongly connected
synchronizing automaton having $I$ as the set of reset words with at most
$(km^{k})2^{km^{k}n}$ states, where $k=|\Sigma|$, $m$ is the length of a
shortest word in $M$, and $n$ is the dimension of the smallest automaton
recognizing $M$ (state complexity of $M$). This automaton is computable and we
show an algorithm to compute it in time $\mathcal{O}((k^{2}m^{k})2^{km^{k}n})$.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2017 12:52:47 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Rodaro",
"Emanuele",
""
]
] |
new_dataset
| 0.996984 |
1708.04871
|
Christian Gorke
|
Christian A. Gorke, Frederik Armknecht
|
SMAUG: Secure Mobile Authentication Using Gestures
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SMAUG (Secure Mobile Authentication Using Gestures), a novel
biometric assisted authentication algorithm for mobile devices that is solely
based on data collected from multiple sensors that are usually installed on
modern devices -- touch screen, gyroscope and accelerometer. As opposed to
existing approaches, our system supports a fully flexible user input such as
free-form gestures, multi-touch, and arbitrary amount of strokes.
Our experiments confirm that this approach provides a high level of
robustness and security. More precisely, in 77% of all our test cases over all
gestures considered, a user has been correctly identified during the first
authentication attempt and in 99% after the third attempt, while an attacker
has been detected in 97% of all test cases. As an example, gestures that have a
good balance between complexity and usability, e.g., drawing two parallel
lines using two fingers at the same time, achieved a 100% success rate after
three login attempts and a 97% impostor detection rate. We stress that we consider
the strongest possible attacker model: an attacker is not only allowed to
monitor the legitimate user during the authentication process, but also
receives additional information on the biometric properties, for example
pressure, speed, rotation, and acceleration. We see this method as a
significant step beyond existing authentication methods that can be deployed
directly to devices in use without the need for additional hardware.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2017 13:13:11 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Gorke",
"Christian A.",
""
],
[
"Armknecht",
"Frederik",
""
]
] |
new_dataset
| 0.999579 |
1708.04879
|
Elizabeth Lucas
|
Elizabeth Lucas
|
Interstitial Content Detection
| null | null | null | null |
cs.CY cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interstitial content is online content that grays out or otherwise obscures
the main page content. In this technical report, we discuss exploratory
research into detecting the presence of interstitial content in web pages. We
discuss the use of computer vision techniques to detect interstitials, and the
potential use of these techniques to provide a labelled dataset for machine
learning.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2017 19:09:02 GMT"
}
] | 2017-08-17T00:00:00 |
[
[
"Lucas",
"Elizabeth",
""
]
] |
new_dataset
| 0.972967 |
1204.3384
|
Zhao CHen Mr.
|
Zhao Chen, Mai Xu, Luiguo Yin and Jianhua Lu
|
Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive
Fountain Codes
| null | null |
10.1109/WCSP.2011.6096843
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel scheme, based on progressive fountain codes, for
broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive
resolution levels of images/video have been unequally protected when
transmitted using the proposed progressive fountain codes. With progressive
fountain codes applied in the broadcast scheme, the resolutions of images (JPEG
2000) or videos (MJPEG 2000) received by different users can be automatically
adaptive to their channel qualities, i.e. the users with good channel qualities
are possible to receive the high resolution images/vedio while the users with
bad channel qualities may receive low resolution images/vedio. Finally, the
performance of the proposed scheme is evaluated with the MJPEG 2000 broadcast
prototype.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2012 07:39:56 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Chen",
"Zhao",
""
],
[
"Xu",
"Mai",
""
],
[
"Yin",
"Luiguo",
""
],
[
"Lu",
"Jianhua",
""
]
] |
new_dataset
| 0.997173 |
1701.00247
|
Hongwei Liu
|
Hongwei Liu, Youcef Maouche
|
Some Repeated-Root Constacyclic Codes over Galois Rings
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Codes over Galois rings have been studied extensively during the last three
decades. Negacyclic codes over $GR(2^a,m)$ of length $2^s$ have been
characterized: the ring $\mathcal{R}_2(a,m,-1)= \frac{GR(2^a,m)[x]}{\langle
x^{2^s}+1\rangle}$ is a chain ring. Furthermore, these results have been
generalized to $\lambda$-constacyclic codes for any unit $\lambda$ of the form
$4z-1$, $z\in GR(2^a, m)$. In this paper, we study more general cases and
investigate all cases where $\mathcal{R}_p(a,m,\gamma)=
\frac{GR(p^a,m)[x]}{\langle x^{p^s}-\gamma \rangle}$ is a chain ring. In
particular, necessary and sufficient conditions for the ring
$\mathcal{R}_p(a,m,\gamma)$ to be a chain ring are obtained. In addition, by
using this structure we investigate all $\gamma$-constacyclic codes over
$GR(p^a,m)$ when $\mathcal{R}_p(a,m,\gamma)$ is a chain ring. Necessary and
sufficient conditions for the existence of self-orthogonal and self-dual
$\gamma$-constacyclic codes are also provided. Among others, for any prime $p$,
the structure of $\mathcal{R}_p(a,m,\gamma)=\frac{GR(p^a,m)[x]}{\langle
x^{p^s}-\gamma\rangle}$ is used to establish the Hamming and homogeneous
distances of $\gamma$-constacyclic codes.
|
[
{
"version": "v1",
"created": "Sun, 1 Jan 2017 14:10:39 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2017 04:10:31 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Liu",
"Hongwei",
""
],
[
"Maouche",
"Youcef",
""
]
] |
new_dataset
| 0.999639 |
1706.00298
|
Andrea Tassi
|
Andrea Tassi, Malcolm Egan, Robert J. Piechocki and Andrew Nix
|
Modeling and Design of Millimeter-Wave Networks for Highway Vehicular
Communication
|
Accepted for publication in IEEE Transactions on Vehicular Technology
-- Connected Vehicles Series
| null | null | null |
cs.IT cs.PF math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Connected and autonomous vehicles will play a pivotal role in future
Intelligent Transportation Systems (ITSs) and smart cities, in general.
High-speed and low-latency wireless communication links will allow
municipalities to warn vehicles against safety hazards, as well as support
cloud-driving solutions to drastically reduce traffic jams and air pollution.
To achieve these goals, vehicles need to be equipped with a wide range of
sensors generating and exchanging high rate data streams. Recently, millimeter
wave (mmWave) techniques have been introduced as a means of fulfilling such
high data rate requirements. In this paper, we model a highway communication
network and characterize its fundamental link budget metrics. In particular, we
specifically consider a network where vehicles are served by mmWave Base
Stations (BSs) deployed alongside the road. To evaluate our highway network, we
develop a new theoretical model that accounts for a typical scenario where
heavy vehicles (such as buses and lorries) in slow lanes obstruct Line-of-Sight
(LOS) paths of vehicles in fast lanes and, hence, act as blockages. Using tools
from stochastic geometry, we derive approximations for the
Signal-to-Interference-plus-Noise Ratio (SINR) outage probability, as well as
the probability that a user achieves a target communication rate (rate coverage
probability). Our analysis provides new design insights for mmWave highway
communication networks. In considered highway scenarios, we show that reducing
the horizontal beamwidth from $90^\circ$ to $30^\circ$ yields only a minimal
reduction in the SINR outage probability (namely, $4 \cdot 10^{-2}$ at
maximum). Also, unlike bi-dimensional mmWave cellular networks, for small BS
densities (namely, one BS every $500$ m) it is still possible to achieve an
SINR outage probability smaller than $0.2$.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2017 13:46:55 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2017 11:01:40 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jul 2017 13:04:53 GMT"
},
{
"version": "v4",
"created": "Thu, 10 Aug 2017 12:40:48 GMT"
},
{
"version": "v5",
"created": "Tue, 15 Aug 2017 15:35:32 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Tassi",
"Andrea",
""
],
[
"Egan",
"Malcolm",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Nix",
"Andrew",
""
]
] |
new_dataset
| 0.950857 |
1708.02114
|
Jiun-Jie Wang
|
Jiun-Jie Wang
|
Layouts for Plane Graphs on Constant Number of Tracks
|
arXiv admin note: text overlap with arXiv:1302.0304 by other authors
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A \emph{$k$-track} layout of a graph consists of a vertex $k$ colouring, and
a total order of each vertex colour class, such that between each pair of
colour classes no two edges cross. A \emph{$k$-queue} layout of a graph
consists of a total order of the vertices, and a partition of the edges into
$k$ sets such that no two edges that are in the same set are nested with
respect to the vertex ordering. The \emph{track number} (\emph{queue number})
of a graph $G$ is the minimum $k$ such that $G$ has a $k$-track ($k$-queue)
layout. This paper proves that every $n$-vertex plane graph has track and queue
numbers bounded by a constant. The result implies that every plane graph has a 3D
crossing-free straight-line grid drawing in $O(n)$ volume. The proof utilizes a
novel graph partition technique.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2017 13:35:58 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2017 16:03:05 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Wang",
"Jiun-Jie",
""
]
] |
new_dataset
| 0.957252 |
1708.04308
|
Yuxin Peng
|
Xin Huang, Yuxin Peng and Mingkuan Yuan
|
MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal
Retrieval
|
12 pages, submitted to IEEE Transactions on Cybernetics
| null | null | null |
cs.MM cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-modal retrieval has drawn wide interest for retrieval across different
modalities of data. However, existing methods based on DNN face the challenge
of insufficient cross-modal training data, which limits the training
effectiveness and easily leads to overfitting. Transfer learning is for
relieving the problem of insufficient training data, but it mainly focuses on
knowledge transfer only from large-scale datasets as single-modal source domain
to single-modal target domain. Such large-scale single-modal datasets also
contain rich modal-independent semantic knowledge that can be shared across
different modalities. Besides, large-scale cross-modal datasets are very
labor-consuming to collect and label, so it is significant to fully exploit the
knowledge in single-modal datasets for boosting cross-modal retrieval. This
paper proposes modal-adversarial hybrid transfer network (MHTN), which to the
best of our knowledge is the first work to realize knowledge transfer from
single-modal source domain to cross-modal target domain, and learn cross-modal
common representation. It is an end-to-end architecture with two subnetworks:
(1) Modal-sharing knowledge transfer subnetwork is proposed to jointly transfer
knowledge from a large-scale single-modal dataset in source domain to all
modalities in target domain with a star network structure, which distills
modal-independent supplementary knowledge for promoting cross-modal common
representation learning. (2) Modal-adversarial semantic learning subnetwork is
proposed to construct an adversarial training mechanism between common
representation generator and modality discriminator, making the common
representation discriminative for semantics but indiscriminative for modalities
to enhance cross-modal semantic consistency during transfer process.
Comprehensive experiments on 4 widely-used datasets show its effectiveness and
generality.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2017 07:50:52 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Huang",
"Xin",
""
],
[
"Peng",
"Yuxin",
""
],
[
"Yuan",
"Mingkuan",
""
]
] |
new_dataset
| 0.982232 |
1708.04341
|
Wayne Hayes
|
Adib Hassan, Po-Chien Chung, Wayne B. Hayes
|
Graphettes: Constant-time determination of graphlet and orbit identity
including (possibly disconnected) graphlets up to size 8
|
13 pages, 4 figures, 2 tables. Accepted to PLOS ONE
| null | null | null |
cs.DS q-bio.MN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graphlets are small connected induced subgraphs of a larger graph $G$.
Graphlets are now commonly used to quantify local and global topology of
networks in the field. Methods exist to exhaustively enumerate all graphlets
(and their orbits) in large networks as efficiently as possible using orbit
counting equations. However, the number of graphlets in $G$ is exponential in
both the number of nodes and edges in $G$. Enumerating them all is already
unacceptably expensive on existing large networks, and the problem will only
get worse as networks continue to grow in size and density. Here we introduce
an efficient method designed to aid statistical sampling of graphlets up to
size $k=8$ from a large network. We define graphettes as the generalization of
graphlets allowing for disconnected graphlets. Given a particular (undirected)
graphette $g$, we introduce the idea of the canonical graphette $\mathcal K(g)$
as a representative member of the isomorphism group $Iso(g)$ of $g$. We compute
the mapping $\mathcal K$, in the form of a lookup table, from all
$2^{k(k-1)/2}$ undirected graphettes $g$ of size $k\le 8$ to their canonical
representatives $\mathcal K(g)$, as well as the permutation that transforms $g$
to $\mathcal K(g)$. We also compute all automorphism orbits for each canonical
graphette. Thus, given any $k\le 8$ nodes in a graph $G$, we can in constant
time infer which graphette it is, as well as which orbit each of the $k$ nodes
belongs to. Sampling a large number $N$ of such $k$-sets of nodes provides an
approximation of both the distribution of graphlets and orbits across $G$, and
the orbit degree vector at each node.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2017 22:06:44 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Hassan",
"Adib",
""
],
[
"Chung",
"Po-Chien",
""
],
[
"Hayes",
"Wayne B.",
""
]
] |
new_dataset
| 0.975943 |
1708.04352
|
Peter Henderson
|
Peter Henderson, Wei-Di Chang, Florian Shkurti, Johanna Hansen, David
Meger, Gregory Dudek
|
Benchmark Environments for Multitask Learning in Continuous Domains
|
Accepted at Lifelong Learning: A Reinforcement Learning Approach
Workshop @ ICML, Sydney, Australia, 2017
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As demand drives systems to generalize to various domains and problems, the
study of multitask, transfer and lifelong learning has become an increasingly
important pursuit. In discrete domains, performance on the Atari game suite has
emerged as the de facto benchmark for assessing multitask learning. However, in
continuous domains there is a lack of agreement on standard multitask
evaluation environments which makes it difficult to compare different
approaches fairly. In this work, we describe a benchmark set of tasks that we
have developed in an extendable framework based on OpenAI Gym. We run a simple
baseline using Trust Region Policy Optimization and release the framework
publicly to be expanded and used for the systematic comparison of multitask,
transfer, and lifelong learning in continuous domains.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2017 22:55:03 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Henderson",
"Peter",
""
],
[
"Chang",
"Wei-Di",
""
],
[
"Shkurti",
"Florian",
""
],
[
"Hansen",
"Johanna",
""
],
[
"Meger",
"David",
""
],
[
"Dudek",
"Gregory",
""
]
] |
new_dataset
| 0.998989 |
1708.04557
|
Slava Mikhaylov
|
Alexander Herzog and Slava J. Mikhaylov
|
Database of Parliamentary Speeches in Ireland, 1919-2013
|
The database is made available on the Harvard Dataverse at
http://dx.doi.org/10.7910/DVN/6MZN76
| null | null | null |
cs.CL cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a database of parliamentary debates that contains the complete
record of parliamentary speeches from D\'ail \'Eireann, the lower house and
principal chamber of the Irish parliament, from 1919 to 2013. In addition, the
database contains background information on all TDs (Teachta D\'ala, members of
parliament), such as their party affiliations, constituencies and office
positions. The current version of the database includes close to 4.5 million
speeches from 1,178 TDs. The speeches were downloaded from the official
parliament website and further processed and parsed with a Python script.
Background information on TDs was collected from the member database of the
parliament website. Data on cabinet positions (ministers and junior ministers)
was collected from the official website of the government. A record linkage
algorithm and human coders were used to match TDs and ministers.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2017 15:34:33 GMT"
}
] | 2017-08-16T00:00:00 |
[
[
"Herzog",
"Alexander",
""
],
[
"Mikhaylov",
"Slava J.",
""
]
] |
new_dataset
| 0.999714 |
1612.01669
|
Jonghwan Mun
|
Jonghwan Mun, Paul Hongsuck Seo, Ilchae Jung, Bohyung Han
|
MarioQA: Answering Questions by Watching Gameplay Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a framework to analyze various aspects of models for video
question answering (VideoQA) using customizable synthetic datasets, which are
constructed automatically from gameplay videos. Our work is motivated by the
fact that existing models are often tested only on datasets that require
excessively high-level reasoning or mostly contain instances accessible through
single frame inferences. Hence, it is difficult to measure capacity and
flexibility of trained models, and existing techniques often rely on ad-hoc
implementations of deep neural networks without clear insight into datasets and
models. We are particularly interested in understanding temporal relationships
between video events to solve VideoQA problems; this is because reasoning
temporal dependency is one of the most distinct components in videos from
images. To address this objective, we automatically generate a customized
synthetic VideoQA dataset using {\em Super Mario Bros.} gameplay videos so that
it contains events with different levels of reasoning complexity. Using the
dataset, we show that properly constructed datasets with events in various
complexity levels are critical to learn effective models and improve overall
performance.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2016 05:23:52 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2017 07:49:55 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Mun",
"Jonghwan",
""
],
[
"Seo",
"Paul Hongsuck",
""
],
[
"Jung",
"Ilchae",
""
],
[
"Han",
"Bohyung",
""
]
] |
new_dataset
| 0.996825 |
1703.00096
|
Zhenyao Zhu
|
Hairong Liu, Zhenyao Zhu, Xiangang Li, Sanjeev Satheesh
|
Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence
Labelling
|
Published at ICML 2017
| null | null | null |
cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing sequence labelling models rely on a fixed decomposition of a
target sequence into a sequence of basic units. These methods suffer from two
major drawbacks: 1) the set of basic units is fixed, such as the set of words,
characters or phonemes in speech recognition, and 2) the decomposition of
target sequences is fixed. These drawbacks usually result in sub-optimal
performance of modeling sequences. In this paper, we extend the popular CTC
loss criterion to alleviate these limitations, and propose a new loss function
called Gram-CTC. While preserving the advantages of CTC, Gram-CTC automatically
learns the best set of basic units (grams), as well as the most suitable
decomposition of target sequences. Unlike CTC, Gram-CTC allows the model to
output a variable number of characters at each time step, which enables the model
to capture longer term dependency and improves the computational efficiency. We
demonstrate that the proposed Gram-CTC improves CTC in terms of both
performance and efficiency on the large vocabulary speech recognition task at
multiple scales of data, and that with Gram-CTC we can outperform the
state-of-the-art on a standard speech benchmark.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2017 00:59:17 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2017 00:02:26 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Liu",
"Hairong",
""
],
[
"Zhu",
"Zhenyao",
""
],
[
"Li",
"Xiangang",
""
],
[
"Satheesh",
"Sanjeev",
""
]
] |
new_dataset
| 0.972098 |
1707.01736
|
Emiel van Miltenburg
|
Emiel van Miltenburg, Desmond Elliott, Piek Vossen
|
Cross-linguistic differences and similarities in image descriptions
|
Accepted for INLG 2017, Santiago de Compostela, Spain, 4-7 September,
2017. Camera-ready version. See the ACL anthology for full bibliographic
information
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic image description systems are commonly trained and evaluated on
large image description datasets. Recently, researchers have started to collect
such datasets for languages other than English. An unexplored question is how
different these datasets are from English and, if there are any differences,
what causes them to differ. This paper provides a cross-linguistic comparison
of Dutch, English, and German image descriptions. We find that these
descriptions are similar in many respects, but the familiarity of crowd workers
with the subjects of the images has a noticeable influence on description
specificity.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2017 11:53:41 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2017 10:18:44 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"van Miltenburg",
"Emiel",
""
],
[
"Elliott",
"Desmond",
""
],
[
"Vossen",
"Piek",
""
]
] |
new_dataset
| 0.971225 |
1707.08271
|
Su Min Kim Prof.
|
Han Seung Jang, Su Min Kim, Hong-Shik Park, Dan Keun Sung
|
A Preamble Collision Resolution Scheme via Tagged Preambles for Cellular
IoT/M2M Communications
|
5 page, 5 figures, revised and resubmitted for publication to IEEE
Transactions on Vehicular Technology
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a preamble (PA) collision resolution (PACR) scheme
based on multiple timing advance (TA) values captured via tagged PAs. In the
proposed PACR scheme, tags are embedded in random access (RA) PAs and multiple
TA values are captured for a single detected PA during a tag detection
procedure. The proposed PACR scheme significantly improves RA success
probability for stationary machine nodes since the nodes using collided PAs can
successfully complete the corresponding RAs using exclusive data resource
blocks.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2017 01:53:56 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2017 02:06:37 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Jang",
"Han Seung",
""
],
[
"Kim",
"Su Min",
""
],
[
"Park",
"Hong-Shik",
""
],
[
"Sung",
"Dan Keun",
""
]
] |
new_dataset
| 0.97757 |
1708.03655
|
James Tompkin
|
Eric Rosen and David Whitney and Elizabeth Phillips and Gary Chien and
James Tompkin and George Konidaris and Stefanie Tellex
|
Communicating Robot Arm Motion Intent Through Mixed Reality Head-mounted
Displays
| null | null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient motion intent communication is necessary for safe and collaborative
work environments with collocated humans and robots. Humans efficiently
communicate their motion intent to other humans through gestures, gaze, and
social cues. However, robots often have difficulty efficiently communicating
their motion intent to humans via these methods. Many existing methods for
robot motion intent communication rely on 2D displays, which require the human
to continually pause their work and check a visualization. We propose a mixed
reality head-mounted display visualization of the proposed robot motion over
the wearer's real-world view of the robot and its environment. To evaluate the
effectiveness of this system against a 2D display visualization and against no
visualization, we asked 32 participants to label different robot arm motions
as either colliding or non-colliding with blocks on a table. We found a 16%
increase in accuracy with a 62% decrease in the time it took to complete the
task compared to the next best system. This demonstrates that a mixed-reality
HMD allows a human to more quickly and accurately tell where the robot is going
to move than the compared baselines.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2017 18:28:02 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Rosen",
"Eric",
""
],
[
"Whitney",
"David",
""
],
[
"Phillips",
"Elizabeth",
""
],
[
"Chien",
"Gary",
""
],
[
"Tompkin",
"James",
""
],
[
"Konidaris",
"George",
""
],
[
"Tellex",
"Stefanie",
""
]
] |
new_dataset
| 0.995765 |
1708.03669
|
Chris Tensmeyer
|
Chris Tensmeyer, Daniel Saunders, and Tony Martinez
|
Convolutional Neural Networks for Font Classification
|
ICDAR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classifying pages or text lines into font categories aids transcription
because single font Optical Character Recognition (OCR) is generally more
accurate than omni-font OCR. We present a simple framework based on
Convolutional Neural Networks (CNNs), where a CNN is trained to classify small
patches of text into predefined font classes. To classify page or line images,
we average the CNN predictions over densely extracted patches. We show that
this method achieves state-of-the-art performance on a challenging dataset of
40 Arabic computer fonts with 98.8\% line level accuracy. This same method also
achieves the highest reported accuracy of 86.6% in predicting paleographic
scribal script classes at the page level on medieval Latin manuscripts.
Finally, we analyze what features are learned by the CNN on Latin manuscripts
and find evidence that the CNN is learning both the defining morphological
differences between scribal script classes as well as overfitting to
class-correlated nuisance factors. We propose a novel form of data augmentation
that improves robustness to text darkness, further increasing classification
performance.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2017 19:25:44 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Tensmeyer",
"Chris",
""
],
[
"Saunders",
"Daniel",
""
],
[
"Martinez",
"Tony",
""
]
] |
new_dataset
| 0.957417 |
1708.03748
|
Nazim Haouchine
|
Nazim Haouchine, Frederick Roy, Hadrien Courtecuisse, Matthias
Nie{\ss}ner and Stephane Cotin
|
Calipso: Physics-based Image and Video Editing through CAD Model Proxies
|
11 pages
| null | null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Calipso, an interactive method for editing images and videos in a
physically-coherent manner. Our main idea is to realize physics-based
manipulations by running a full physics simulation on proxy geometries given by
non-rigidly aligned CAD models. Running these simulations allows us to apply
new, unseen forces to move or deform selected objects, change physical
parameters such as mass or elasticity, or even add entire new objects that
interact with the rest of the underlying scene. In Calipso, the user makes
edits directly in 3D; these edits are processed by the simulation and then
transferred to the target 2D content using shape-to-image correspondences in a
photo-realistic rendering process. To align the CAD models, we introduce an
efficient CAD-to-image alignment procedure that jointly minimizes for rigid and
non-rigid alignment while preserving the high-level structure of the input
shape. Moreover, the user can choose to exploit image flow to estimate scene
motion, producing coherent physical behavior with ambient dynamics. We
demonstrate Calipso's physics-based editing on a wide range of examples
producing myriad physical behavior while preserving geometric and visual
consistency.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2017 07:40:39 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Haouchine",
"Nazim",
""
],
[
"Roy",
"Frederick",
""
],
[
"Courtecuisse",
"Hadrien",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Cotin",
"Stephane",
""
]
] |
new_dataset
| 0.99922 |
1708.03778
|
George Danezis
|
Mustafa Al-Bassam, Alberto Sonnino, Shehar Bano, Dave Hrycyszyn and
George Danezis
|
Chainspace: A Sharded Smart Contracts Platform
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Chainspace is a decentralized infrastructure, known as a distributed ledger,
that supports user defined smart contracts and executes user-supplied
transactions on their objects. The correct execution of smart contract
transactions is verifiable by all. The system is scalable, by sharding state
and the execution of transactions, and using S-BAC, a distributed commit
protocol, to guarantee consistency. Chainspace is secure against subsets of
nodes trying to compromise its integrity or availability properties through
Byzantine Fault Tolerance (BFT), and extremely high-auditability,
non-repudiation and `blockchain' techniques. Even when BFT fails, auditing
mechanisms are in place to trace malicious participants. We present the design,
rationale, and details of Chainspace; we argue through evaluating an
implementation of the system about its scaling and other features; we
illustrate a number of privacy-friendly smart contracts for smart metering,
polling and banking and measure their performance.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2017 13:24:10 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Al-Bassam",
"Mustafa",
""
],
[
"Sonnino",
"Alberto",
""
],
[
"Bano",
"Shehar",
""
],
[
"Hrycyszyn",
"Dave",
""
],
[
"Danezis",
"George",
""
]
] |
new_dataset
| 0.998192 |
1708.03783
|
Ryo Suzuki
|
Ryo Suzuki, Abigale Stangl, Mark D. Gross, Tom Yeh
|
FluxMarker: Enhancing Tactile Graphics with Dynamic Tactile Markers
|
ASSETS 2017
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For people with visual impairments, tactile graphics are an important means
to learn and explore information. However, raised line tactile graphics created
with traditional materials such as embossing are static. While available
refreshable displays can dynamically change the content, they are still too
expensive for many users, and are limited in size. These factors limit
widespread adoption and the representation of large graphics or data sets. In
this paper, we present FluxMaker, an inexpensive scalable system that renders
dynamic information on top of static tactile graphics with movable tactile
markers. These dynamic tactile markers can be easily reconfigured and used to
annotate static raised line tactile graphics, including maps, graphs, and
diagrams. We developed a hardware prototype that actuates magnetic tactile
markers driven by low-cost and scalable electromagnetic coil arrays, which can
be fabricated with standard printed circuit board manufacturing. We evaluate
our prototype with six participants with visual impairments and found positive
results across four application areas: location finding or navigating on
tactile maps, data analysis and physicalization, feature identification for
tactile graphics, and drawing support. The user study confirms advantages in
application domains such as education and data exploration.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2017 14:25:50 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Suzuki",
"Ryo",
""
],
[
"Stangl",
"Abigale",
""
],
[
"Gross",
"Mark D.",
""
],
[
"Yeh",
"Tom",
""
]
] |
new_dataset
| 0.979732 |
1708.03867
|
Qi Dou
|
Qi Dou, Hao Chen, Yueming Jin, Huangjing Lin, Jing Qin, Pheng-Ann Heng
|
Automated Pulmonary Nodule Detection via 3D ConvNets with Online Sample
Filtering and Hybrid-Loss Residual Learning
|
Accepted to MICCAI 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel framework with 3D convolutional networks
(ConvNets) for automated detection of pulmonary nodules from low-dose CT scans,
which is a challenging yet crucial task for lung cancer early diagnosis and
treatment. Different from previous standard ConvNets, we try to tackle the
severe hard/easy sample imbalance problem in medical datasets and explore the
benefits of localized annotations to regularize the learning, and hence boost
the performance of ConvNets to achieve more accurate detections. Our proposed
framework consists of two stages: 1) candidate screening, and 2) false positive
reduction. In the first stage, we establish a 3D fully convolutional network,
effectively trained with an online sample filtering scheme, to sensitively and
rapidly screen the nodule candidates. In the second stage, we design a
hybrid-loss residual network which harnesses the location and size information
as important cues to guide the nodule recognition procedure. Experimental
results on the public large-scale LUNA16 dataset demonstrate superior
performance of our proposed method compared with state-of-the-art approaches
for the pulmonary nodule detection task.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2017 07:33:55 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Dou",
"Qi",
""
],
[
"Chen",
"Hao",
""
],
[
"Jin",
"Yueming",
""
],
[
"Lin",
"Huangjing",
""
],
[
"Qin",
"Jing",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.964365 |
1708.03882
|
Raphael Jolly
|
Raphael Jolly
|
Monadic Remote Invocation
| null | null | null | null |
cs.PL cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to achieve Separation of Concerns in the domain of remote method
invocation, a small functional adapter is added atop Java RMI, eliminating the
need for every remote object to implement java.rmi.Remote and making it
possible to remotely access existing code, unchanged. The Remote monad is
introduced, and its implementation and usage are detailed. Reusing the
existing, proven technology of RMI allows not to re-invent the underlying
network protocol. As a result, orthogonal remote invocation is achieved with
little or no implementation effort.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2017 10:29:06 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Jolly",
"Raphael",
""
]
] |
new_dataset
| 0.969208 |
1708.03919
|
Mohammadali Mohammadi
|
Zahra Mobini, Mohammadali Mohammadi, Himal A. Suraweera, Zhiguo Ding
|
Full-duplex Multi-Antenna Relay Assisted Cooperative Non-Orthogonal
Multiple Access
|
Accepted for IEEE Global Communications Conference (GLOBECOM 2017)
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We consider a cooperative non-orthogonal multiple access (NOMA) network in
which a full-duplex (FD) multi-antenna relay assists transmission from a base
station (BS) to a set of far users with poor channel conditions, while at the
same time the BS transmits to a set of near users with strong channel
conditions. We assume imperfect self-interference (SI) cancellation at the FD
relay and imperfect inter-user interference cancellation at the near users. In
order to cancel the SI at the relay a zero-forcing based beamforming scheme is
used and the corresponding outage probability analysis of two user selection
strategies, namely random near user and random far user (RNRF), and nearest
near user and nearest far user (NNNF), are derived. Our finding suggests that
significant performance improvement can be achieved by using the FD
multi-antenna relay compared to the counterpart system with a half-duplex
relay. The achieved performance gain depends on network parameters such as the
user density, user zones, path loss and the strength of the inter-user
interference in case of near users. We also show that the NNNF strategy
exhibits a superior outage performance compared to the RNRF strategy,
especially in the case of near users.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2017 14:54:07 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Mobini",
"Zahra",
""
],
[
"Mohammadi",
"Mohammadali",
""
],
[
"Suraweera",
"Himal A.",
""
],
[
"Ding",
"Zhiguo",
""
]
] |
new_dataset
| 0.984839 |
1708.03986
|
Rong Gong
|
Rong Gong, Rafael Caro Repetto, Xavier Serra
|
Creating an A Cappella Singing Audio Dataset for Automatic Jingju
Singing Evaluation Research
|
4th International Digital Libraries for Musicology workshop (DLfM
2017), Shanghai, China
| null | null | null |
cs.SD cs.DL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The data-driven computational research on automatic jingju (also known as
Beijing or Peking opera) singing evaluation lacks a suitable and comprehensive
a cappella singing audio dataset. In this work, we present an a cappella
singing audio dataset which consists of 120 arias, accounting for 1265 melodic
lines. This dataset is also an extension of our existing CompMusic jingju corpus.
Both professional and amateur singers were invited to the dataset recording
sessions, and the most common jingju musical elements have been covered. This
dataset is also accompanied by metadata per aria and melodic line annotated for
automatic singing evaluation research purpose. All the gathered data is openly
available online.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2017 01:43:13 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Gong",
"Rong",
""
],
[
"Repetto",
"Rafael Caro",
""
],
[
"Serra",
"Xavier",
""
]
] |
new_dataset
| 0.999765 |
1708.04006
|
Jiaolong Yang
|
Chen Zhou, Jiaolong Yang, Chunshui Zhao, Gang Hua
|
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile
Robots
|
Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrated that the
proposed methods are fast and able to detect thin obstacles robustly and
accurately under various conditions.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2017 04:35:04 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Zhou",
"Chen",
""
],
[
"Yang",
"Jiaolong",
""
],
[
"Zhao",
"Chunshui",
""
],
[
"Hua",
"Gang",
""
]
] |
new_dataset
| 0.992108 |
1708.04078
|
Fang-Zhou Jiang
|
Fang-Zhou Jiang, Kanchana Thilakarathna, Sirine Mrabet, Mohamed Ali
Kaafar, and Aruna Seneviratne
|
uStash: a Novel Mobile Content Delivery System for Improving User QoE in
Public Transport
|
14 Pages
| null | null | null |
cs.NI cs.DC cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile data traffic is growing exponentially and it is even more challenging
to distribute content efficiently while users are "on the move" such as in
public transport. The use of mobile devices for accessing content (e.g. videos)
while commuting is both expensive and unreliable, although it is becoming
common practice worldwide. Leveraging on the spatial and temporal correlation
of content popularity and users' diverse network connectivity, we propose a
novel content distribution system, \textit{uStash}, which guarantees better QoE
with regards to access delays and cost of usage. The proposed collaborative
download and content stashing schemes provide the uStash provider the
flexibility to control the cost of content access via cellular networks. We
model the uStash system in a probabilistic framework and thereby analytically
derive the optimal portions for collaborative downloading. Then, we validate
the proposed models using real-life trace driven simulations. In particular, we
use dataset from 22 inter-city buses running on 6 different routes and from a
mobile VoD service provider to show that uStash reduces the cost of monthly
cellular data by approximately 50\% and the expected delay for content access
by 60\% compared to content downloaded via users' cellular network connections.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2017 11:28:13 GMT"
}
] | 2017-08-15T00:00:00 |
[
[
"Jiang",
"Fang-Zhou",
""
],
[
"Thilakarathna",
"Kanchana",
""
],
[
"Mrabet",
"Sirine",
""
],
[
"Kaafar",
"Mohamed Ali",
""
],
[
"Seneviratne",
"Aruna",
""
]
] |
new_dataset
| 0.999359 |
1706.02095
|
Diego Molla-Aliod
|
Diego Molla-Aliod
|
Macquarie University at BioASQ 5b -- Query-based Summarisation
Techniques for Selecting the Ideal Answers
|
As published in BioNLP2017. 9 pages, 5 figures, 4 tables
|
Proceedings of the BioNLP 2017 Workshop (Vancouver, Canada), pages
67-75 (2017)
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Macquarie University's contribution to the BioASQ challenge (Task 5b Phase B)
focused on the use of query-based extractive summarisation techniques for the
generation of the ideal answers. Four runs were submitted, with approaches
ranging from a trivial system that selected the first $n$ snippets, to the use
of deep learning approaches under a regression framework. Our experiments and
the ROUGE results of the five test batches of BioASQ indicate surprisingly good
results for the trivial approach. Overall, most of our runs on the first three
test batches achieved the best ROUGE-SU4 results in the challenge.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2017 09:04:29 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2017 07:11:19 GMT"
}
] | 2017-08-14T00:00:00 |
[
[
"Molla-Aliod",
"Diego",
""
]
] |
new_dataset
| 0.957352 |
1708.03383
|
Fangting Xia
|
Fangting Xia, Peng Wang, Xianjie Chen, Alan Yuille
|
Joint Multi-Person Pose Estimation and Semantic Part Segmentation
|
This paper has been accepted by CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human pose estimation and semantic part segmentation are two complementary
tasks in computer vision. In this paper, we propose to solve the two tasks
jointly for natural multi-person images, in which the estimated pose provides
object-level shape prior to regularize part segments while the part-level
segments constrain the variation of pose locations. Specifically, we first
train two fully convolutional neural networks (FCNs), namely Pose FCN and Part
FCN, to provide initial estimation of pose joint potential and semantic part
potential. Then, to refine pose joint location, the two types of potentials are
fused with a fully-connected conditional random field (FCRF), where a novel
segment-joint smoothness term is used to encourage semantic and spatial
consistency between parts and joints. To refine part segments, the refined pose
and the original part potential are integrated through a Part FCN, where the
skeleton feature from pose serves as additional regularization cues for part
segments. Finally, to reduce the complexity of the FCRF, we induce human
detection boxes and infer the graph inside each box, making the inference forty
times faster.
Since there's no dataset that contains both part segments and pose labels, we
extend the PASCAL VOC part dataset with human pose joints and perform extensive
experiments to compare our method against several most recent strategies. We
show that on this dataset our algorithm surpasses competing methods by a large
margin in both tasks.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2017 20:59:31 GMT"
}
] | 2017-08-14T00:00:00 |
[
[
"Xia",
"Fangting",
""
],
[
"Wang",
"Peng",
""
],
[
"Chen",
"Xianjie",
""
],
[
"Yuille",
"Alan",
""
]
] |
new_dataset
| 0.9893 |
1708.03468
|
Thanh Bui
|
Thanh Bui and Tuomas Aura
|
Key exchange with the help of a public ledger
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchains and other public ledger structures promise a new way to create
globally consistent event logs and other records. We make use of this
consistency property to detect and prevent man-in-the-middle attacks in a key
exchange such as Diffie-Hellman or ECDH. Essentially, the MitM attack creates
an inconsistency in the world views of the two honest parties, and they can
detect it with the help of the ledger. Thus, there is no need for prior
knowledge or trusted third parties apart from the distributed ledger. To
prevent impersonation attacks, we require user interaction. It appears that, in
some applications, the required user interaction is reduced in comparison to
other user-assisted key-exchange protocols.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2017 08:25:06 GMT"
}
] | 2017-08-14T00:00:00 |
[
[
"Bui",
"Thanh",
""
],
[
"Aura",
"Tuomas",
""
]
] |
new_dataset
| 0.995514 |
1607.05338
|
Joseph DeGol
|
Joseph DeGol, Mani Golparvar-Fard, Derek Hoiem
|
Geometry-Informed Material Recognition
|
IEEE Conference on Computer Vision and Pattern Recognition 2016 (CVPR
'16)
| null |
10.1109/CVPR.2016.172
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our goal is to recognize material categories using images and geometry
information. In many applications, such as construction management, coarse
geometry information is available. We investigate how 3D geometry (surface
normals, camera intrinsic and extrinsic parameters) can be used with 2D
features (texture and color) to improve material classification. We introduce a
new dataset, GeoMat, which is the first to provide both image and geometry data
in the form of: (i) training and testing patches that were extracted at
different scales and perspectives from real world examples of each material
category, and (ii) a large scale construction site scene that includes 160
images and over 800,000 hand labeled 3D points. Our results show that using 2D
and 3D features both jointly and independently to model materials improves
classification accuracy across multiple scales and viewing directions for both
material patches and images of a large scale construction site scene.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2016 22:15:49 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"DeGol",
"Joseph",
""
],
[
"Golparvar-Fard",
"Mani",
""
],
[
"Hoiem",
"Derek",
""
]
] |
new_dataset
| 0.998464 |
1702.05658
|
Chang Liu
|
Chang Liu, Fuchun Sun, Changhu Wang, Feng Wang, Alan Yuille
|
MAT: A Multimodal Attentive Translator for Image Captioning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we formulate the problem of image captioning as a multimodal
translation task. Analogous to machine translation, we present a
sequence-to-sequence recurrent neural network (RNN) model for image caption
generation. Different from most existing work where the whole image is
represented by convolutional neural network (CNN) feature, we propose to
represent the input image as a sequence of detected objects which feeds as the
source sequence of the RNN model. In this way, the sequential representation of
an image can be naturally translated to a sequence of words, as the target
sequence of the RNN model. To represent the image in a sequential way, we
extract the object features in the image and arrange them in an order using
convolutional neural networks. To further leverage the visual information from
the encoded objects, a sequential attention layer is introduced to selectively
attend to the objects that are related to generate corresponding words in the
sentences. Extensive experiments are conducted to validate the proposed
approach on the popular benchmark dataset, i.e., MS COCO, and the proposed model
surpasses the state-of-the-art methods in all metrics following the dataset
splits of previous work. The proposed approach is also evaluated by the
evaluation server of MS COCO captioning challenge, and achieves very
competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2017 21:35:06 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2017 18:39:02 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Aug 2017 14:29:19 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Liu",
"Chang",
""
],
[
"Sun",
"Fuchun",
""
],
[
"Wang",
"Changhu",
""
],
[
"Wang",
"Feng",
""
],
[
"Yuille",
"Alan",
""
]
] |
new_dataset
| 0.992943 |
1703.00754
|
Alejo Concha Belenguer
|
Alejo Concha and Javier Civera
|
RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous Localization and Mapping using RGB-D cameras has been a fertile
research topic in the latest decade, due to the suitability of such sensors for
indoor robotics. In this paper we propose a direct RGB-D SLAM algorithm with
state-of-the-art accuracy and robustness at a low cost. Our experiments on the
RGB-D TUM dataset [34] effectively show a better accuracy and robustness in CPU
real time than direct RGB-D SLAM systems that make use of the GPU. The key
ingredients of our approach are mainly two. Firstly, the combination of a
semi-dense photometric and dense geometric error for the pose tracking (see
Figure 1), which we demonstrate to be the most accurate alternative. And
secondly, a model of the multi-view constraints and their errors in the mapping
and tracking threads, which adds extra information over other approaches. We
release the open-source implementation of our approach. The reader is
referred to a video with our results for a more illustrative visualization of
its performance.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2017 12:24:43 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2017 11:23:22 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2017 12:14:38 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Aug 2017 21:01:32 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Concha",
"Alejo",
""
],
[
"Civera",
"Javier",
""
]
] |
new_dataset
| 0.999676 |
1705.00360
|
Xuebin Qin
|
Xuebin Qin, Shida He, Camilo Perez Quintero, Abhineet Singh, Masood
Dehghan and Martin Jagersand
|
Real-Time Salient Closed Boundary Tracking via Line Segments Perceptual
Grouping
|
7 pages, 8 figures, The 2017 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2017) submission ID 1034
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel real-time method for tracking salient closed
boundaries from video image sequences. This method operates on a set of
straight line segments that are produced by line detection. The tracking scheme
is coherently integrated into a perceptual grouping framework in which the
visual tracking problem is tackled by identifying a subset of these line
segments and connecting them sequentially to form a closed boundary with the
largest saliency and a certain similarity to the previous one. Specifically, we
define a new tracking criterion which combines a grouping cost and an area
similarity constraint. The proposed criterion makes the resulting boundary
tracking more robust to local minima. To achieve real-time tracking
performance, we use Delaunay Triangulation to build a graph model with the
detected line segments and then reduce the tracking problem to finding the
optimal cycle in this graph. This is solved by our newly proposed closed
boundary candidates searching algorithm called "Bidirectional Shortest Path
(BDSP)". The efficiency and robustness of the proposed method are tested on
real video sequences as well as during a robot arm pouring experiment.
|
[
{
"version": "v1",
"created": "Sun, 30 Apr 2017 19:01:07 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2017 23:44:36 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Qin",
"Xuebin",
""
],
[
"He",
"Shida",
""
],
[
"Quintero",
"Camilo Perez",
""
],
[
"Singh",
"Abhineet",
""
],
[
"Dehghan",
"Masood",
""
],
[
"Jagersand",
"Martin",
""
]
] |
new_dataset
| 0.999501 |
1707.03550
|
Yingjie Hu
|
Yingjie Hu
|
Geospatial Semantics
|
Yingjie Hu (2017). Geospatial Semantics. In Bo Huang, Thomas J. Cova,
and Ming-Hsiang Tsou et al. (Eds): Comprehensive Geographic Information
Systems, Elsevier. Oxford, UK
| null |
10.1016/B978-0-12-409548-9.09597-X
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geospatial semantics is a broad field that involves a variety of research
areas. The term semantics refers to the meaning of things, and is in contrast
with the term syntactics. Accordingly, studies on geospatial semantics usually
focus on understanding the meaning of geographic entities as well as their
counterparts in the cognitive and digital world, such as cognitive geographic
concepts and digital gazetteers. Geospatial semantics can also facilitate the
design of geographic information systems (GIS) by enhancing the
interoperability of distributed systems and developing more intelligent
interfaces for user interactions. During the past years, a lot of research has
been conducted, approaching geospatial semantics from different perspectives,
using a variety of methods, and targeting different problems. Meanwhile, the
arrival of big geo data, especially the large amount of unstructured text data
on the Web, and the fast development of natural language processing methods
enable new research directions in geospatial semantics. This chapter,
therefore, provides a systematic review on the existing geospatial semantic
research. Six major research areas are identified and discussed, including
semantic interoperability, digital gazetteers, geographic information
retrieval, geospatial Semantic Web, place semantics, and cognitive geographic
concepts.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2017 05:41:06 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Aug 2017 05:40:49 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Hu",
"Yingjie",
""
]
] |
new_dataset
| 0.992313 |
1708.02970
|
Tae-Hyun Oh
|
Tae-Hyun Oh, Kyungdon Joo, Neel Joshi, Baoyuan Wang, In So Kweon, Sing
Bing Kang
|
Personalized Cinemagraphs using Semantic Understanding and Collaborative
Learning
|
To appear in ICCV 2017. Total 17 pages including the supplementary
material
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In
these media, dynamic and still elements are juxtaposed to create an artistic
and narrative experience. Creating a high-quality, aesthetically pleasing
cinemagraph requires isolating objects in a semantically meaningful way and
then selecting good start times and looping periods for those objects to
minimize visual artifacts (such as tearing). To achieve this, we present a new
technique that uses object recognition and semantic segmentation as part of an
optimization method to automatically create cinemagraphs from videos that are
both visually appealing and semantically meaningful. Given a scene with
multiple objects, there are many cinemagraphs one could create. Our method
evaluates these multiple candidates and presents the best one, as determined by
a model trained to predict human preferences in a collaborative way. We
demonstrate the effectiveness of our approach with multiple results and a user
study.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 19:03:12 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Oh",
"Tae-Hyun",
""
],
[
"Joo",
"Kyungdon",
""
],
[
"Joshi",
"Neel",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Kweon",
"In So",
""
],
[
"Kang",
"Sing Bing",
""
]
] |
new_dataset
| 0.998187 |
1708.02982
|
Joseph DeGol
|
Joseph DeGol and Timothy Bretl and Derek Hoiem
|
ChromaTag: A Colored Marker and Fast Detection Algorithm
|
International Conference on Computer Vision (ICCV '17)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current fiducial marker detection algorithms rely on marker IDs for false
positive rejection. Time is wasted on potential detections that will eventually
be rejected as false positives. We introduce ChromaTag, a fiducial marker and
detection algorithm designed to use opponent colors to limit and quickly reject
initial false detections and grayscale for precise localization. Through
experiments, we show that ChromaTag is significantly faster than current
fiducial markers while achieving similar or better detection accuracy. We also
show how tag size and viewing direction affect detection accuracy. Our
contribution is significant because fiducial markers are often used in
real-time applications (e.g. marker assisted robot navigation) where heavy
computation is required by other parts of the system.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 19:41:51 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"DeGol",
"Joseph",
""
],
[
"Bretl",
"Timothy",
""
],
[
"Hoiem",
"Derek",
""
]
] |
new_dataset
| 0.999476 |
1708.03151
|
Michael Saint-Guillain
|
Michael Saint-Guillain, Christine Solnon and Yves Deville
|
The Static and Stochastic VRPTW with both random Customers and Reveal
Times: algorithms and recourse strategies
|
Preprint version submitted to Transportation Research Part E
| null | null | null |
cs.AI cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike its deterministic counterpart, static and stochastic vehicle routing
problems (SS-VRP) aim at modeling and solving real-life operational problems by
considering uncertainty on data. We consider the SS-VRPTW-CR introduced in
Saint-Guillain et al. (2017). Like the SS-VRP introduced by Bertsimas (1992),
we search for optimal first stage routes for a fleet of vehicles to handle a
set of stochastic customer demands, i.e., demands are uncertain and we only
know their probabilities. In addition to capacity constraints, customer demands
are also constrained by time windows. Unlike all SS-VRP variants, the
SS-VRPTW-CR does not make any assumption on the time at which a stochastic
demand is revealed, i.e., the reveal time is stochastic as well. To handle this
new problem, we introduce waiting locations: Each vehicle is assigned a
sequence of waiting locations from which it may serve some associated demands,
and the objective is to minimize the expected number of demands that cannot be
satisfied in time. In this paper, we propose two new recourse strategies for
the SS-VRPTW-CR, together with their closed-form expressions for efficiently
computing their expectations: The first one allows us to take vehicle
capacities into account; The second one allows us to optimize routes by
avoiding some useless trips. We propose two algorithms for searching for routes
with optimal expected costs: The first one is an extended branch-and-cut
algorithm, based on a stochastic integer formulation, and the second one is a
local search based heuristic method. We also introduce a new public benchmark
for the SS-VRPTW-CR, based on real-world data coming from the city of Lyon. We
evaluate our two algorithms on this benchmark and empirically demonstrate the
expected superiority of the SS-VRPTW-CR anticipative actions over a basic
"wait-and-serve" policy.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2017 10:20:01 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Saint-Guillain",
"Michael",
""
],
[
"Solnon",
"Christine",
""
],
[
"Deville",
"Yves",
""
]
] |
new_dataset
| 0.989975 |
1708.03312
|
Yuanzhi Ke
|
Yuanzhi Ke and Masafumi Hagiwara
|
Radical-level Ideograph Encoder for RNN-based Sentiment Analysis of
Chinese and Japanese
|
12 pages, 4 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The character vocabulary can be very large in non-alphabetic languages such
as Chinese and Japanese, which makes neural network models huge to process such
languages. We explored a model for sentiment classification that takes the
embeddings of the radicals of the Chinese characters, i.e, hanzi of Chinese and
kanji of Japanese. Our model is composed of a CNN word feature encoder and a
bi-directional RNN document feature encoder. The results achieved are on par
with the character embedding-based models, and close to the state-of-the-art
word embedding-based models, with 90% smaller vocabulary, and at least 13% and
80% fewer parameters than the character embedding-based models and word
embedding-based models respectively. The results suggest that the radical
embedding-based approach is cost-effective for machine learning on Chinese and
Japanese.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2017 17:46:28 GMT"
}
] | 2017-08-11T00:00:00 |
[
[
"Ke",
"Yuanzhi",
""
],
[
"Hagiwara",
"Masafumi",
""
]
] |
new_dataset
| 0.999071 |
1204.0747
|
Anil Hirani
|
Anil N. Hirani, Kaushik Kalyanaraman, Evan B. VanderZee
|
Delaunay Hodge Star
|
Corrected error in Figure 1 (columns 3 and 4) and Figure 6 and a
formula error in Section 2. All mathematical statements (theorems and lemmas)
are unchanged. The previous arXiv version v3 (minus the Appendix) appeared in
the journal Computer-Aided Design
|
Computer-Aided Design, Volume 45, Issue 2, 2013, pages 540-544
|
10.1016/j.cad.2012.10.038
| null |
cs.CG math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define signed dual volumes at all dimensions for circumcentric dual
meshes. We show that for pairwise Delaunay triangulations with mild boundary
assumptions these signed dual volumes are positive. This allows the use of such
Delaunay meshes for Discrete Exterior Calculus (DEC) because the discrete Hodge
star operator can now be correctly defined for such meshes. This operator is
crucial for DEC and is a diagonal matrix with the ratio of primal and dual
volumes along the diagonal. A correct definition requires that all entries be
positive. DEC is a framework for numerically solving differential equations on
meshes and for geometry processing tasks and has had considerable impact in
computer graphics and scientific computing. Our result allows the use of DEC
with a much larger class of meshes than was previously considered possible.
|
[
{
"version": "v1",
"created": "Tue, 3 Apr 2012 17:51:07 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Jun 2012 19:38:35 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Aug 2012 18:11:25 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Aug 2017 15:02:08 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Hirani",
"Anil N.",
""
],
[
"Kalyanaraman",
"Kaushik",
""
],
[
"VanderZee",
"Evan B.",
""
]
] |
new_dataset
| 0.983865 |
1612.08515
|
Anne-Kathrin Schmuck
|
Kaushik Mallik, Anne-Kathrin Schmuck, Sadegh Soudjani, Rupak Majumdar
|
Compositional Abstraction-Based Controller Synthesis for Continuous-Time
Systems
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controller synthesis techniques for continuous systems with respect to
temporal logic specifications typically use a finite-state symbolic abstraction
of the system model. Constructing this abstraction for the entire system is
computationally expensive, and does not exploit natural decompositions of many
systems into interacting components. We describe a methodology for
compositional symbolic abstraction to help scale controller synthesis for
temporal logic to larger systems.
We introduce a new relation, called (approximate) disturbance bisimulation,
as the basis for compositional symbolic abstractions. Disturbance bisimulation
strengthens the standard approximate alternating bisimulation relation used in
control. It extends naturally to systems which are composed of weakly
interconnected sub-components possibly connected in feedback, and models the
coupling signals as disturbances. After proving this composability of
disturbance bisimulation for metric systems we apply this result to the
compositional abstraction of networks of input-to-state stable deterministic
non-linear control systems. We give conditions that allow to construct
finite-state abstractions compositionally for each component in such a network,
so that the abstractions are simultaneously disturbance bisimilar to their
continuous counterparts. Combining these two results, we show conditions under
which one can compositionally abstract a network of non-linear control systems
in a modular way while ensuring that the final composed abstraction is
disturbance bisimilar to the original system. We discuss how we get a
compositional abstraction-based controller synthesis methodology for networks
of such systems against local temporal specifications as a by-product of our
construction.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2016 06:55:38 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2017 16:44:52 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2017 12:24:51 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Mallik",
"Kaushik",
""
],
[
"Schmuck",
"Anne-Kathrin",
""
],
[
"Soudjani",
"Sadegh",
""
],
[
"Majumdar",
"Rupak",
""
]
] |
new_dataset
| 0.981995 |
1702.01275
|
Matias Korman
|
Alfredo Garc\'ia, Ferran Hurtado, Matias Korman, In\^es Matos, Maria
Saumell, Rodrigo I. Silveira, Javier Tejel, Csaba D. T\'oth
|
Geometric Biplane Graphs I: Maximal Graphs
| null |
Graphs and Combinatorics 31(2) (2015), 407-425
|
10.1007/s00373-015-1546-1
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study biplane graphs drawn on a finite planar point set $S$ in general
position. This is the family of geometric graphs whose vertex set is $S$ and
can be decomposed into two plane graphs. We show that two maximal biplane
graphs---in the sense that no edge can be added while staying biplane---may
differ in the number of edges, and we provide an efficient algorithm for adding
edges to a biplane graph to make it maximal. We also study extremal properties
of maximal biplane graphs such as the maximum number of edges and the largest
maximum connectivity over $n$-element point sets.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2017 11:51:44 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"García",
"Alfredo",
""
],
[
"Hurtado",
"Ferran",
""
],
[
"Korman",
"Matias",
""
],
[
"Matos",
"Inês",
""
],
[
"Saumell",
"Maria",
""
],
[
"Silveira",
"Rodrigo I.",
""
],
[
"Tejel",
"Javier",
""
],
[
"Tóth",
"Csaba D.",
""
]
] |
new_dataset
| 0.999315 |
1702.01277
|
Matias Korman
|
Alfredo Garc\'ia, Ferran Hurtado, Matias Korman, In\^es Matos, Maria
Saumell, Rodrigo I. Silveira, Javier Tejel, Csaba D. T\'oth
|
Geometric Biplane Graphs II: Graph Augmentation
| null |
Graphs and Combinatorics 31(2) (2015), 427-452
|
10.1007/s00373-015-1547-0
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study biplane graphs drawn on a finite point set $S$ in the plane in
general position. This is the family of geometric graphs whose vertex set is
$S$ and which can be decomposed into two plane graphs. We show that every
sufficiently large point set admits a 5-connected biplane graph and that there
are arbitrarily large point sets that do not admit any 6-connected biplane
graph. Furthermore, we show that every plane graph (other than a wheel or a
fan) can be augmented into a 4-connected biplane graph. However, there are
arbitrarily large plane graphs that cannot be augmented to a 5-connected
biplane graph by adding pairwise noncrossing edges.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2017 11:51:50 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"García",
"Alfredo",
""
],
[
"Hurtado",
"Ferran",
""
],
[
"Korman",
"Matias",
""
],
[
"Matos",
"Inês",
""
],
[
"Saumell",
"Maria",
""
],
[
"Silveira",
"Rodrigo I.",
""
],
[
"Tejel",
"Javier",
""
],
[
"Tóth",
"Csaba D.",
""
]
] |
new_dataset
| 0.999676 |
1702.05150
|
Zoya Bylinskii
|
Nam Wook Kim, Zoya Bylinskii, Michelle A. Borkin, Krzysztof Z. Gajos,
Aude Oliva, Fredo Durand, Hanspeter Pfister
|
BubbleView: an interface for crowdsourcing image importance maps and
tracking visual attention
| null |
TOCHI 2017
|
10.1145/3131275
| null |
cs.HC cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present BubbleView, an alternative methodology for eye
tracking using discrete mouse clicks to measure which information people
consciously choose to examine. BubbleView is a mouse-contingent, moving-window
interface in which participants are presented with a series of blurred images
and click to reveal "bubbles" - small, circular areas of the image at original
resolution, similar to having a confined area of focus like the eye fovea.
Across 10 experiments with 28 different parameter combinations, we evaluated
BubbleView on a variety of image types: information visualizations, natural
images, static webpages, and graphic designs, and compared the clicks to eye
fixations collected with eye-trackers in controlled lab settings. We found that
BubbleView clicks can both (i) successfully approximate eye fixations on
different images, and (ii) be used to rank image and design elements by
importance. BubbleView is designed to collect clicks on static images, and
works best for defined tasks such as describing the content of an information
visualization or measuring image importance. BubbleView data is cleaner and
more consistent than related methodologies that use continuous mouse movements.
Our analyses validate the use of mouse-contingent, moving-window methodologies
as approximating eye fixations for different image and task types.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2017 20:49:26 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2017 23:37:19 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2017 14:23:54 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Kim",
"Nam Wook",
""
],
[
"Bylinskii",
"Zoya",
""
],
[
"Borkin",
"Michelle A.",
""
],
[
"Gajos",
"Krzysztof Z.",
""
],
[
"Oliva",
"Aude",
""
],
[
"Durand",
"Fredo",
""
],
[
"Pfister",
"Hanspeter",
""
]
] |
new_dataset
| 0.957961 |
1707.03904
|
Bhuwan Dhingra
|
Bhuwan Dhingra, Kathryn Mazaitis and William W. Cohen
|
Quasar: Datasets for Question Answering by Search and Reading
| null | null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar .
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2017 20:53:26 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2017 01:48:08 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Dhingra",
"Bhuwan",
""
],
[
"Mazaitis",
"Kathryn",
""
],
[
"Cohen",
"William W.",
""
]
] |
new_dataset
| 0.999851 |
1707.05853
|
Glorianna Jagfeld
|
Glorianna Jagfeld and Ngoc Thang Vu
|
Encoding Word Confusion Networks with Recurrent Neural Networks for
Dialog State Tracking
|
Speech-Centric Natural Language Processing Workshop @EMNLP 2017
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents our novel method to encode word confusion networks, which
can represent a rich hypothesis space of automatic speech recognition systems,
via recurrent neural networks. We demonstrate the utility of our approach for
the task of dialog state tracking in spoken dialog systems that relies on
automatic speech recognition output. Encoding confusion networks outperforms
encoding the best hypothesis of the automatic speech recognition in a neural
system for dialog state tracking on the well-known second Dialog State Tracking
Challenge dataset.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2017 20:47:06 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2017 09:58:43 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Jagfeld",
"Glorianna",
""
],
[
"Vu",
"Ngoc Thang",
""
]
] |
new_dataset
| 0.985495 |
1708.02654
|
EPTCS
|
Hans van Ditmarsch, Michael Ian Hartley, Barteld Kooi, Jonathan
Welton, Joseph B.W. Yeo
|
Cheryl's Birthday
|
In Proceedings TARK 2017, arXiv:1707.08250
|
EPTCS 251, 2017, pp. 1-9
|
10.4204/EPTCS.251.1
| null |
cs.AI cs.GL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present four logic puzzles and after that their solutions. Joseph Yeo
designed 'Cheryl's Birthday'. Mike Hartley came up with a novel solution for
'One Hundred Prisoners and a Light Bulb'. Jonathan Welton designed 'A Blind
Guess' and 'Abby's Birthday'. Hans van Ditmarsch and Barteld Kooi authored the
puzzlebook 'One Hundred Prisoners and a Light Bulb' that contains other
knowledge puzzles, and that can also be found on the webpage
http://personal.us.es/hvd/lightbulb.html dedicated to the book.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2017 07:44:49 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"van Ditmarsch",
"Hans",
""
],
[
"Hartley",
"Michael Ian",
""
],
[
"Kooi",
"Barteld",
""
],
[
"Welton",
"Jonathan",
""
],
[
"Yeo",
"Joseph B. W.",
""
]
] |
new_dataset
| 0.999822 |
1708.02681
|
He Zhang
|
He Zhang, Vishal M. Patel, Benjamin S. Riggan, and Shuowen Hu
|
Generative Adversarial Network-based Synthesis of Visible Faces from
Polarimetric Thermal Faces
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The large domain discrepancy between faces captured in polarimetric (or
conventional) thermal and visible domain makes cross-domain face recognition
quite a challenging problem for both human-examiners and computer vision
algorithms. Previous approaches utilize a two-step procedure (visible feature
estimation and visible image reconstruction) to synthesize the visible image
given the corresponding polarimetric thermal image. However, these are regarded
as two disjoint steps and hence may hinder the performance of visible face
reconstruction. We argue that joint optimization would be a better way to
reconstruct more photo-realistic images for both computer vision algorithms and
human-examiners to examine. To this end, this paper proposes a Generative
Adversarial Network-based Visible Face Synthesis (GAN-VFS) method to synthesize
more photo-realistic visible face images from their corresponding polarimetric
images. To ensure that the encoded visible-features contain more semantically
meaningful information in reconstructing the visible face image, a guidance
sub-network is involved into the training procedure. To achieve photo realistic
property while preserving discriminative characteristics for the reconstructed
outputs, an identity loss combined with the perceptual loss are optimized in
the framework. Multiple experiments evaluated on different experimental
protocols demonstrate that the proposed method achieves state-of-the-art
performance.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2017 23:57:12 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Zhang",
"He",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Riggan",
"Benjamin S.",
""
],
[
"Hu",
"Shuowen",
""
]
] |
new_dataset
| 0.979323 |
1708.02765
|
Pavel Kucherbaev
|
Pavel Kucherbaev, Nava Tintarev, Carlos Rodriguez
|
Ephemeral Context to Support Robust and Diverse Music Recommendations
|
3 pages, 1 figure, Machine Learning for Music Discovery workshop at
ICML2017
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While prior work on context-based music recommendation focused on fixed set
of contexts (e.g. walking, driving, jogging), we propose to use multiple
sensors and external data sources to describe momentary (ephemeral) context in
a rich way with a very large number of possible states (e.g. jogging fast along
in downtown of Sydney under a heavy rain at night being tired and angry). With
our approach, we address the problems which current approaches face: 1) a
limited ability to infer context from missing or faulty sensor data; 2) an
inability to use contextual information to support novel content discovery.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 09:00:03 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Kucherbaev",
"Pavel",
""
],
[
"Tintarev",
"Nava",
""
],
[
"Rodriguez",
"Carlos",
""
]
] |
new_dataset
| 0.997572 |
1708.02813
|
Nikita Dvornik
|
Nikita Dvornik, Konstantin Shmelkov, Julien Mairal, Cordelia Schmid
|
BlitzNet: A Real-Time Deep Network for Scene Understanding
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time scene understanding has become crucial in many applications such as
autonomous driving. In this paper, we propose a deep architecture, called
BlitzNet, that jointly performs object detection and semantic segmentation in
one forward pass, allowing real-time computations. Besides the computational
gain of having a single network to perform several tasks, we show that object
detection and semantic segmentation benefit from each other in terms of
accuracy. Experimental results for VOC and COCO datasets show state-of-the-art
performance for object detection and segmentation among real time systems.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 12:36:17 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Dvornik",
"Nikita",
""
],
[
"Shmelkov",
"Konstantin",
""
],
[
"Mairal",
"Julien",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.998245 |
1708.02837
|
Pedro F. Proen\c{c}a
|
Pedro F. Proen\c{c}a and Yang Gao
|
SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation
from RGB-D Camera Motion
|
IROS 2017
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active depth cameras suffer from several limitations, which cause incomplete
and noisy depth maps, and may consequently affect the performance of RGB-D
Odometry. To address this issue, this paper presents a visual odometry method
based on point and line features that leverages both measurements from a depth
sensor and depth estimates from camera motion. Depth estimates are generated
continuously by a probabilistic depth estimation framework for both types of
features to compensate for the lack of depth measurements and inaccurate
feature depth associations. The framework models explicitly the uncertainty of
triangulating depth from both point and line observations to validate and
obtain precise estimates. Furthermore, depth measurements are exploited by
propagating them through a depth map registration module and using a
frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D
reprojection errors, independently. Results on RGB-D sequences captured on
large indoor and outdoor scenes, where depth sensor limitations are critical,
show that the combination of depth measurements and estimates through our
approach is able to overcome the absence and inaccuracy of depth measurements.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 13:50:30 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Proença",
"Pedro F.",
""
],
[
"Gao",
"Yang",
""
]
] |
new_dataset
| 0.994639 |
1708.02862
|
Wen Li
|
Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, Luc Van Gool
|
WebVision Database: Visual Learning and Understanding from Web Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a study on learning visual recognition models from
large scale noisy web data. We build a new database called WebVision, which
contains more than $2.4$ million web images crawled from the Internet by using
queries generated from the 1,000 semantic concepts of the benchmark ILSVRC 2012
dataset. Meta information accompanying those web images (e.g., title,
description, tags, etc.) is also crawled. A validation set and test set
containing human annotated images are also provided to facilitate algorithmic
development. Based on our new database, we obtain a few interesting
observations: 1) the noisy web images are sufficient for training a good deep
CNN model for visual recognition; 2) the model learnt from our WebVision
database exhibits comparable or even better generalization ability than the one
trained from the ILSVRC 2012 dataset when being transferred to new datasets and
tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which
means the dataset can be used as the largest benchmark dataset for visual
domain adaptation. Our new WebVision database and relevant studies in this work
would benefit the advance of learning state-of-the-art visual models with
minimum supervision based on web data.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 14:59:30 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Li",
"Wen",
""
],
[
"Wang",
"Limin",
""
],
[
"Li",
"Wei",
""
],
[
"Agustsson",
"Eirikur",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999386 |
1708.02912
|
Tharindu Weerasooriya
|
Tharindu Weerasooriya, Nandula Perera and S.R. Liyanage
|
KeyXtract Twitter Model - An Essential Keywords Extraction Model for
Twitter Designed using NLP Tools
|
7 Pages, 5 Figures, Proceedings of the 10th KDU International
Research Conference
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since a tweet is limited to 140 characters, it is ambiguous and difficult for
traditional Natural Language Processing (NLP) tools to analyse. This research
presents KeyXtract which enhances the machine learning based Stanford CoreNLP
Part-of-Speech (POS) tagger with the Twitter model to extract essential
keywords from a tweet. The system was developed using rule-based parsers and
two corpora. The data for the research was obtained from a Twitter profile of a
telecommunication company. The system development consisted of two stages. At
the initial stage, a domain specific corpus was compiled after analysing the
tweets. The POS tagger extracted the Noun Phrases and Verb Phrases while the
parsers removed noise and extracted any other keywords missed by the POS
tagger. The system was evaluated using the Turing Test. After it was tested and
compared against Stanford CoreNLP, the second stage of the system was developed
addressing the shortcomings of the first stage. It was enhanced using Named
Entity Recognition and Lemmatization. The second stage was also tested using
the Turing test and its pass rate increased from 50.00% to 83.33%. The
performance of the final system output was measured using the F1 score.
Stanford CoreNLP with the Twitter model had an average F1 of 0.69 while the
improved system had an F1 of 0.77. The accuracy of the system could be improved
by using a complete domain specific corpus. Since the system used linguistic
features of a sentence, it could be applied to other NLP tools.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 17:04:34 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Weerasooriya",
"Tharindu",
""
],
[
"Perera",
"Nandula",
""
],
[
"Liyanage",
"S. R.",
""
]
] |
new_dataset
| 0.977991 |
1708.02921
|
Johan P. Hansen
|
Johan P. Hansen
|
Asymmetric Quantum Codes on Toric Surfaces
|
9 pages
| null | null | null |
cs.CR math.AG quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Asymmetric quantum error-correcting codes are quantum codes defined over
biased quantum channels: qubit-flip and phase-shift errors may have equal or
different probabilities. The code construction is the Calderbank-Shor-Steane
construction based on two linear codes.
We present families of toric surfaces, toric codes and associated asymmetric
quantum error-correcting codes.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2017 17:34:37 GMT"
}
] | 2017-08-10T00:00:00 |
[
[
"Hansen",
"Johan P.",
""
]
] |
new_dataset
| 0.99975 |
1412.0305
|
Swastik Kopparty
|
Alan Guo, Swastik Kopparty
|
List-decoding algorithms for lifted codes
|
15 pages, no figures. Revision expands the proof of the main
technical theorem, Theorem 3.2
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifted Reed-Solomon codes are a natural affine-invariant family of
error-correcting codes which generalize Reed-Muller codes. They were known to
have efficient local-testing and local-decoding algorithms (comparable to the
known algorithms for Reed-Muller codes), but with significantly better rate. We
give efficient algorithms for list-decoding and local list-decoding of lifted
codes. Our algorithms are based on a new technical lemma, which says that
codewords of lifted codes are low degree polynomials when viewed as univariate
polynomials over a big field (even though they may be very high degree when
viewed as multivariate polynomials over a small field).
|
[
{
"version": "v1",
"created": "Sun, 30 Nov 2014 23:29:17 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2017 00:21:51 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Guo",
"Alan",
""
],
[
"Kopparty",
"Swastik",
""
]
] |
new_dataset
| 0.997047 |
1602.00970
|
Paolo Napoletano
|
Paolo Napoletano
|
Visual descriptors for content-based retrieval of remote sensing images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an extensive evaluation of visual descriptors for
the content-based retrieval of remote sensing (RS) images. The evaluation
includes global hand-crafted, local hand-crafted, and Convolutional Neural
Network (CNNs) features coupled with four different Content-Based Image
Retrieval schemes. We conducted all the experiments on two publicly available
datasets: the 21-class UC Merced Land Use/Land Cover (LandUse) dataset and
19-class High-resolution Satellite Scene dataset (SceneSat). The content of RS
images might be quite heterogeneous, ranging from images containing fine
grained textures, to coarse grained ones, or to images containing objects. It is
therefore not obvious which descriptor should be employed in this domain to
describe images with such variability. Results demonstrate that CNN-based
features perform better than both global and local hand-crafted features,
whatever retrieval scheme is adopted. Features extracted from SatResNet-50,
a residual CNN suitably fine-tuned on the RS domain, show much better
performance than a residual CNN pre-trained on multimedia scene and object
images. Features extracted from NetVLAD, a CNN that considers both CNN and
local features, work better than other CNN solutions on those images that
contain fine-grained textures and objects.
|
[
{
"version": "v1",
"created": "Tue, 2 Feb 2016 15:19:16 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Sep 2016 11:28:46 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Jan 2017 11:18:52 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Feb 2017 17:58:37 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Aug 2017 09:36:07 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Napoletano",
"Paolo",
""
]
] |
new_dataset
| 0.996629 |
1602.03729
|
Fabrizio Montesi
|
Lu\'is Cruz-Filipe and Fabrizio Montesi
|
A Language for the Declarative Composition of Concurrent Protocols
| null | null |
10.1007/978-3-319-60225-7_7
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A recent study of bugs in real-world concurrent and distributed systems found
that, while implementations of individual protocols tend to be robust, the
composition of multiple protocols and its interplay with internal computation
is the culprit for most errors. Multiparty Session Types and Choreographic
Programming are methodologies for developing correct-by-construction concurrent
and distributed software, based on global descriptions of communication flows.
However, protocol composition is either limited or left unchecked. Inspired by
these two methodologies, in this work we present a new language model for the
safe composition of protocols, called Procedural Choreographies (PC). Protocols
in PC are procedures, parameterised on the processes that enact them.
Procedures define communications declaratively using global descriptions, and
programs are written by invoking and composing these procedures. An
implementation in terms of a process model is then mechanically synthesised,
guaranteeing correctness and deadlock-freedom. We study PC in the settings of
synchronous and asynchronous communications, and illustrate its expressivity
with some representative examples.
|
[
{
"version": "v1",
"created": "Thu, 11 Feb 2016 14:02:09 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2016 06:38:19 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Cruz-Filipe",
"Luís",
""
],
[
"Montesi",
"Fabrizio",
""
]
] |
new_dataset
| 0.991442 |
1611.07688
|
Paolo Napoletano
|
Daniela Micucci, Marco Mobilio, Paolo Napoletano
|
UniMiB SHAR: a new dataset for human activity recognition using
acceleration data from smartphones
|
submitted to MDPI Sensors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are
being increasingly used to monitor human activities. Data acquired by the
hosted sensors are usually processed by machine-learning-based algorithms to
classify human activities. The success of those algorithms mostly depends on
the availability of training (labeled) data that, if made publicly available,
would allow researchers to make objective comparisons between techniques.
Nowadays, publicly available data sets are few, often contain samples from
subjects with overly similar characteristics, and very often lack specific
information, so that it is not possible to select subsets of samples according to
specific criteria. In this article, we present a new dataset of acceleration
samples acquired with an Android smartphone designed for human activity
recognition and fall detection. The dataset includes 11,771 samples of both
human activities and falls performed by 30 subjects of ages ranging from 18 to
60 years. Samples are divided into 17 fine-grained classes grouped into two
coarse-grained classes: one containing samples of 9 types of activities of daily
living (ADL) and the other containing samples of 8 types of falls. The dataset
has been stored to include all the information useful to select samples
according to different criteria, such as the type of ADL, the age, the gender,
and so on. Finally, the dataset has been benchmarked with four different
classifiers and with two different feature vectors. We evaluated four different
classification tasks: fall vs no fall, 9 activities, 8 falls, 17 activities and
falls. For each classification task we performed a subject-dependent and
independent evaluation. The major findings of the evaluation are the following:
i) it is more difficult to distinguish between types of falls than types of
activities; ii) subject-dependent evaluation outperforms the
subject-independent one.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2016 08:45:49 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2017 09:53:23 GMT"
},
{
"version": "v3",
"created": "Sun, 4 Jun 2017 17:40:49 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Jul 2017 14:19:03 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Aug 2017 09:55:41 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Micucci",
"Daniela",
""
],
[
"Mobilio",
"Marco",
""
],
[
"Napoletano",
"Paolo",
""
]
] |
new_dataset
| 0.999476 |
1707.05754
|
Dingzeyu Li
|
Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
|
AirCode: Unobtrusive Physical Tags for Digital Fabrication
|
ACM UIST 2017 Technical Papers
| null |
10.1145/3126594.3126635
| null |
cs.HC cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present AirCode, a technique that allows the user to tag physically
fabricated objects with given information. An AirCode tag consists of a group
of carefully designed air pockets placed beneath the object surface. These air
pockets are easily produced during the fabrication process of the object,
without any additional material or postprocessing. Meanwhile, the air pockets
affect only the scattering light transport under the surface, and thus are hard
to notice with the naked eye. However, by using a computational imaging method, the
tags become detectable. We present a tool that automates the design of air
pockets for the user to encode information. AirCode system also allows the user
to retrieve the information from captured images via a robust decoding
algorithm. We demonstrate our tagging technique with applications for metadata
embedding, robotic grasping, as well as conveying object affordances.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2017 17:27:16 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2017 22:34:53 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Li",
"Dingzeyu",
""
],
[
"Nair",
"Avinash S.",
""
],
[
"Nayar",
"Shree K.",
""
],
[
"Zheng",
"Changxi",
""
]
] |
new_dataset
| 0.999837 |
1708.01318
|
Khanh Nguyen
|
Amr Sharaf, Shi Feng, Khanh Nguyen, Kiant\'e Brantley, Hal Daum\'e III
|
The UMD Neural Machine Translation Systems at WMT17 Bandit Learning Task
|
7 pages, 1 figure, WMT 2017 Bandit Learning Task
| null | null | null |
cs.CL cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the University of Maryland machine translation systems submitted
to the WMT17 German-English Bandit Learning Task. The task is to adapt a
translation system to a new domain, using only bandit feedback: the system
receives a German sentence to translate, produces an English sentence, and only
gets a scalar score as feedback. Targeting these two challenges (adaptation and
bandit learning), we built a standard neural machine translation system and
extended it in two ways: (1) robust reinforcement learning techniques to learn
effectively from the bandit feedback, and (2) domain adaptation using data
selection from a large corpus of parallel data.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2017 21:42:46 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2017 20:45:50 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Sharaf",
"Amr",
""
],
[
"Feng",
"Shi",
""
],
[
"Nguyen",
"Khanh",
""
],
[
"Brantley",
"Kianté",
""
],
[
"Daumé",
"Hal",
"III"
]
] |
new_dataset
| 0.983202 |
1708.02274
|
Burak Pak
|
Burak Pak, Alvin Chua, Andrew Vande Moere
|
FixMyStreet Brussels: Socio-Demographic Inequality in Crowdsourced Civic
Participation
| null |
Journal of Urban Technology 2017 Volume 24 No 2 65 to 87
|
10.1080/10630732.2016.1270047
| null |
cs.CY cs.HC cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
FixMyStreet (FMS) is a web-based civic participation platform that allows
inhabitants to report environmental defects like potholes and damaged pavements
to the government. In this paper, we examine the use of FMS in Brussels, the
capital city of Belgium. Analyzing a total of 30,041 reports since its
inception in 2013, we demonstrate how civic participation on FMS varies between
the ethnically diverse districts in Brussels. We compare FMS use to a range of
sociodemographic indicators derived from official city statistics as well as
geotagged social media data from Twitter. Our statistical analysis revealed
several significant differences between the districts that suggested that
crowdsourced civic participation platforms tend to marginalize low-income and
ethnically diverse communities. In this respect, our findings provide timely
evidence to inform the design of more inclusive crowdsourced, civic
participation platforms in the future.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2017 19:24:36 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Pak",
"Burak",
""
],
[
"Chua",
"Alvin",
""
],
[
"Moere",
"Andrew Vande",
""
]
] |
new_dataset
| 0.999427 |
1708.02322
|
Sanchit Alekh
|
Sanchit Alekh
|
Automatic Raga Recognition in Hindustani Classical Music
|
Seminar on Computer Music, RWTH Aachen,
http://hpac.rwth-aachen.de/teaching/sem-mus-17/Reports/Alekh.pdf
| null | null | null |
cs.SD
|
http://creativecommons.org/licenses/by/4.0/
|
Raga is the central melodic concept in Hindustani Classical Music. It has a
complex structure, often characterized by pathos. In this paper, we describe a
technique for Automatic Raga Recognition, based on pitch distributions. We are
able to successfully classify ragas with a commendable accuracy on our test
dataset.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2017 22:18:55 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"Alekh",
"Sanchit",
""
]
] |
new_dataset
| 0.999672 |
1708.02512
|
Daniele Cono D'Elia
|
Daniele Cono D'Elia, Camil Demetrescu
|
On-Stack Replacement \`a la Carte
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On-stack replacement (OSR) dynamically transfers execution between different
code versions. This mechanism is used in mainstream runtime systems to support
adaptive and speculative optimizations by running code tailored to provide the
best expected performance for the actual workload. Current approaches either
restrict the program points where OSR can be fired or require complex
optimization-specific operations to realign the program's state during a
transition. The engineering effort to implement OSR and the lack of
abstractions make it rarely accessible to the research community, leaving
fundamental questions regarding its flexibility largely unexplored.
In this article we make a first step towards a provably sound abstract
framework for OSR. We show that compiler optimizations can be made OSR-aware in
isolation, and then safely composed. We identify a class of transformations,
which we call live-variable equivalent (LVE), that captures a natural property
of fundamental compiler optimizations, and devise an algorithm to automatically
generate the OSR machinery required for an LVE transition at arbitrary program
locations.
We present an implementation of our ideas in LLVM and evaluate it against
prominent benchmarks, showing that bidirectional OSR transitions are possible
almost everywhere in the code in the presence of common, unhindered global
optimizations. We then discuss the end-to-end utility of our techniques in
source-level debugging of optimized code, showing how our algorithms can
provide novel building blocks for debuggers for both executables and managed
runtimes.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2017 15:03:01 GMT"
}
] | 2017-08-09T00:00:00 |
[
[
"D'Elia",
"Daniele Cono",
""
],
[
"Demetrescu",
"Camil",
""
]
] |
new_dataset
| 0.96164 |
1608.07022
|
Mingyu Xiao
|
Mingyu Xiao and Shaowei Kou
|
Kernelization and Parameterized Algorithms for 3-Path Vertex Cover
|
in TAMC 2016, LNCS 9796, 2016
|
TAMC 2017, LNCS 10185, 654-668
|
10.1007/978-3-319-55911-7_47
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A 3-path vertex cover in a graph is a vertex subset $C$ such that every path
of three vertices contains at least one vertex from $C$. The parameterized
3-path vertex cover problem asks whether a graph has a 3-path vertex cover of
size at most $k$. In this paper, we give a kernel of $5k$ vertices and an
$O^*(1.7485^k)$-time and polynomial-space algorithm for this problem; both new
results improve previously known bounds.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2016 06:16:32 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Xiao",
"Mingyu",
""
],
[
"Kou",
"Shaowei",
""
]
] |
new_dataset
| 0.955774 |
1612.03242
|
Han Zhang
|
Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang,
Xiaolei Huang, Dimitris Metaxas
|
StackGAN: Text to Photo-realistic Image Synthesis with Stacked
Generative Adversarial Networks
|
ICCV 2017 Oral Presentation
| null | null | null |
cs.CV cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthesizing high-quality images from text descriptions is a challenging
problem in computer vision and has many practical applications. Samples
generated by existing text-to-image approaches can roughly reflect the meaning
of the given descriptions, but they fail to contain necessary details and vivid
object parts. In this paper, we propose Stacked Generative Adversarial Networks
(StackGAN) to generate 256x256 photo-realistic images conditioned on text
descriptions. We decompose the hard problem into more manageable sub-problems
through a sketch-refinement process. The Stage-I GAN sketches the primitive
shape and colors of the object based on the given text description, yielding
Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text
descriptions as inputs, and generates high-resolution images with
photo-realistic details. It is able to rectify defects in Stage-I results and
add compelling details with the refinement process. To improve the diversity of
the synthesized images and stabilize the training of the conditional-GAN, we
introduce a novel Conditioning Augmentation technique that encourages
smoothness in the latent conditioning manifold. Extensive experiments and
comparisons with state-of-the-arts on benchmark datasets demonstrate that the
proposed method achieves significant improvements on generating photo-realistic
images conditioned on text descriptions.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2016 03:11:37 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Aug 2017 02:18:21 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Zhang",
"Han",
""
],
[
"Xu",
"Tao",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Huang",
"Xiaolei",
""
],
[
"Metaxas",
"Dimitris",
""
]
] |
new_dataset
| 0.97947 |
1704.02206
|
Oggi Rudovic
|
Dieu Linh Tran, Robert Walecki, Ognjen Rudovic, Stefanos
Eleftheriadis, Bj{\o}rn Schuller and Maja Pantic
|
DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial
Action Coding
|
ICCV 2017 - accepted
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The human face exhibits an inherent hierarchy in its representations (i.e.,
holistic facial expressions can be encoded via a set of facial action units
(AUs) and their intensity). Variational (deep) auto-encoders (VAE) have shown
great results in unsupervised extraction of hierarchical latent representations
from large amounts of image data, while being robust to noise and other
undesired artifacts. Potentially, this makes VAEs a suitable approach for
learning facial features for AU intensity estimation. Yet, most existing
VAE-based methods apply classifiers learned separately from the encoded
features. By contrast, the non-parametric (probabilistic) approaches, such as
Gaussian Processes (GPs), typically outperform their parametric counterparts,
but cannot deal easily with large amounts of data. To this end, we propose a
novel VAE semi-parametric modeling framework, named DeepCoder, which combines
the modeling power of parametric (convolutional) and nonparametric (ordinal
GPs) VAEs, for joint learning of (1) latent representations at multiple levels
in a task hierarchy, and (2) classification of multiple ordinal outputs. We
show on benchmark datasets for AU intensity estimation that the proposed
DeepCoder outperforms the state-of-the-art approaches, and related VAEs and
deep learning models.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2017 16:23:56 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Aug 2017 04:26:08 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Tran",
"Dieu Linh",
""
],
[
"Walecki",
"Robert",
""
],
[
"Rudovic",
"Ognjen",
""
],
[
"Eleftheriadis",
"Stefanos",
""
],
[
"Schuller",
"Bjørn",
""
],
[
"Pantic",
"Maja",
""
]
] |
new_dataset
| 0.964295 |
1705.03943
|
Kumar Vijay Mishra
|
Kumar Vijay Mishra and Yonina C. Eldar
|
Sub-Nyquist Channel Estimation over IEEE 802.11ad Link
|
5 pages, 5 figures, SampTA 2017 conference
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, millimeter-wave communication centered at the 60 GHz radio
frequency band is increasingly the preferred technology for near-field
communication since it provides transmission bandwidth that is several GHz
wide. The IEEE 802.11ad standard has been developed for commercial wireless
local area networks in the 60 GHz transmission environment. Receivers designed
to process IEEE 802.11ad waveforms employ very high rate analog-to-digital
converters, and therefore, reducing the receiver sampling rate can be useful.
In this work, we study the problem of low-rate channel estimation over the IEEE
802.11ad 60 GHz communication link by harnessing sparsity in the channel
impulse response. In particular, we focus on single carrier modulation and
exploit the special structure of the 802.11ad waveform embedded in the channel
estimation field of its single carrier physical layer frame. We examine various
sub-Nyquist sampling methods for this problem and recover the channel using
compressed sensing techniques. Our numerical experiments show feasibility of
our procedures up to one-seventh of the Nyquist rates with minimal performance
deterioration.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2017 20:17:15 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2017 08:26:48 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Mishra",
"Kumar Vijay",
""
],
[
"Eldar",
"Yonina C.",
""
]
] |
new_dataset
| 0.984753 |
1707.05388
|
Matteo Ruggero Ronchi
|
Matteo Ruggero Ronchi and Pietro Perona
|
Benchmarking and Error Diagnosis in Multi-Instance Pose Estimation
|
Project page available at
http://www.vision.caltech.edu/~mronchi/projects/PoseErrorDiagnosis/; Code
available at https://github.com/matteorr/coco-analyze; published at ICCV 17
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new method to analyze the impact of errors in algorithms for
multi-instance pose estimation and a principled benchmark that can be used to
compare them. We define and characterize three classes of errors -
localization, scoring, and background - study how they are influenced by
instance attributes and their impact on an algorithm's performance. Our
technique is applied to compare the two leading methods for human pose
estimation on the COCO Dataset, measure the sensitivity of pose estimation with
respect to instance size, type and number of visible keypoints, clutter due to
multiple instances, and the relative score of instances. The performance of
algorithms, and the types of error they make, are highly dependent on all these
variables, but mostly on the number of keypoints and the clutter. The analysis
and software tools we propose offer a novel and insightful approach for
understanding the behavior of pose estimation algorithms and an effective
method for measuring their strengths and weaknesses.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2017 20:32:37 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Aug 2017 00:55:29 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Ronchi",
"Matteo Ruggero",
""
],
[
"Perona",
"Pietro",
""
]
] |
new_dataset
| 0.999002 |
1708.01646
|
Zeev Dvir
|
Zeev Dvir, Benjamin Edelman
|
Matrix rigidity and the Croot-Lev-Pach lemma
|
5 pages
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matrix rigidity is a notion put forth by Valiant as a means for proving
arithmetic circuit lower bounds. A matrix is rigid if it is far, in Hamming
distance, from any low rank matrix. Despite decades of efforts, no explicit
matrix rigid enough to carry out Valiant's plan has been found. Recently, Alman
and Williams showed, contrary to common belief, that the $2^n \times 2^n$
Hadamard matrix could not be used for Valiant's program as it is not
sufficiently rigid. In this note we observe a similar `non-rigidity' phenomenon
for any $q^n \times q^n$ matrix $M$ of the form $M(x,y) = f(x+y)$, where
$f:F_q^n \to F_q$ is any function and $F_q$ is a fixed finite field of $q$
elements ($n$ goes to infinity). The theorem follows almost immediately from a
recent lemma of Croot, Lev and Pach which is also the main ingredient in the
recent solution of the cap-set problem.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2017 19:23:31 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Dvir",
"Zeev",
""
],
[
"Edelman",
"Benjamin",
""
]
] |
new_dataset
| 0.992526 |
1708.01650
|
Daniela Micucci
|
Fabrizio Pastore, Leonardo Mariani, Daniela Micucci
|
BDCI: Behavioral Driven Conflict Identification
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Source Code Management (SCM) systems support software evolution by providing
features, such as version control, branching, and conflict detection. Despite
the presence of these features, support to parallel software development is
often limited. SCM systems can only address a subset of the conflicts that
might be introduced by developers when concurrently working on multiple
parallel branches. In fact, SCM systems can detect textual conflicts, which are
generated by the concurrent modification of the same program locations, but
they are unable to detect higher-order conflicts, which are generated by the
concurrent modification of different program locations that generate program
misbehaviors once merged. Higher-order conflicts are painful to detect and
expensive to fix because they might be originated by the interference of
apparently unrelated changes. In this paper we present Behavioral Driven
Conflict Identification (BDCI), a novel approach to conflict detection. BDCI
moves the analysis of conflicts from the source code level to the level of
program behavior by generating and comparing behavioral models. The analysis
based on behavioral models can reveal interfering changes as soon as they are
introduced in the SCM system, even if they do not introduce any textual
conflict. To evaluate the effectiveness and the cost of the proposed approach,
we developed BDCIf, a specific instance of BDCI dedicated to the detection of
higher-order conflicts related to the functional behavior of a program. The
evidence collected by analyzing multiple versions of Git and Redis suggests
that BDCIf can effectively detect higher-order conflicts and report how changes
might interfere.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2017 19:36:16 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Pastore",
"Fabrizio",
""
],
[
"Mariani",
"Leonardo",
""
],
[
"Micucci",
"Daniela",
""
]
] |
new_dataset
| 0.999793 |
1708.01670
|
Robert Maier
|
Robert Maier, Kihwan Kim, Daniel Cremers, Jan Kautz, Matthias
Nie{\ss}ner
|
Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and
Geometry Optimization with Spatially-Varying Lighting
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel method to obtain high-quality 3D reconstructions from
consumer RGB-D sensors. Our core idea is to simultaneously optimize for
geometry encoded in a signed distance field (SDF), textures from
automatically-selected keyframes, and their camera poses along with material
and scene lighting. To this end, we propose a joint surface reconstruction
approach that is based on Shape-from-Shading (SfS) techniques and utilizes the
estimation of spatially-varying spherical harmonics (SVSH) from subvolumes of
the reconstructed scene. Through extensive examples and evaluations, we
demonstrate that our method dramatically increases the level of detail in the
reconstructed scene geometry and contributes highly to consistent surface
texture recovery.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2017 21:34:46 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Maier",
"Robert",
""
],
[
"Kim",
"Kihwan",
""
],
[
"Cremers",
"Daniel",
""
],
[
"Kautz",
"Jan",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.974871 |
1708.01776
|
Clemens Rosenbaum
|
Clemens Rosenbaum, Tian Gao, Tim Klinger
|
e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations
|
7 pages, 3 figures, presented at 2017 ICML Workshop on Human
Interpretability in Machine Learning (WHI 2017), Sydney, NSW, Australia
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a new dataset and user simulator e-QRAQ (explainable
Query, Reason, and Answer Question) which tests an Agent's ability to read an
ambiguous text; ask questions until it can answer a challenge question; and
explain the reasoning behind its questions and answer. The User simulator
provides the Agent with a short, ambiguous story and a challenge question about
the story. The story is ambiguous because some of the entities have been
replaced by variables. At each turn the Agent may ask for the value of a
variable or try to answer the challenge question. In response the User
simulator provides a natural language explanation of why the Agent's query or
answer was useful in narrowing down the set of possible answers, or not. To
demonstrate one potential application of the e-QRAQ dataset, we train a new
neural architecture based on End-to-End Memory Networks to successfully
generate both predictions and partial explanations of its current understanding
of the problem. We observe a strong correlation between the quality of the
prediction and explanation.
|
[
{
"version": "v1",
"created": "Sat, 5 Aug 2017 15:06:56 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Rosenbaum",
"Clemens",
""
],
[
"Gao",
"Tian",
""
],
[
"Klinger",
"Tim",
""
]
] |
new_dataset
| 0.982015 |
1708.01797
|
Trevor Brown
|
Maya Arbel-Raviv and Trevor Brown
|
Reuse, don't Recycle: Transforming Lock-free Algorithms that Throw Away
Descriptors
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many lock-free algorithms, threads help one another, and each operation
creates a descriptor that describes how other threads should help it.
Allocating and reclaiming descriptors introduces significant space and time
overhead. We introduce the first descriptor abstract data type (ADT), which
captures the usage of descriptors by lock-free algorithms. We then develop a
weak descriptor ADT which has weaker semantics, but can be implemented
significantly more efficiently. We show how a large class of lock-free
algorithms can be transformed to use weak descriptors, and demonstrate our
technique by transforming several algorithms, including the leading
k-compare-and-swap (k-CAS) algorithm. The original k-CAS algorithm allocates at
least k+1 new descriptors per k-CAS. In contrast, our implementation allocates
two descriptors per process, and each process simply reuses its two
descriptors. Experiments on a variety of workloads show significant performance
improvements over implementations that reclaim descriptors, and reductions of
up to three orders of magnitude in peak memory usage.
|
[
{
"version": "v1",
"created": "Sat, 5 Aug 2017 18:04:26 GMT"
}
] | 2017-08-08T00:00:00 |
[
[
"Arbel-Raviv",
"Maya",
""
],
[
"Brown",
"Trevor",
""
]
] |
new_dataset
| 0.991603 |