Unnamed: 0 | title | category | summary | theme
---|---|---|---|---|
3,300 |
Assessing Visual Quality of Omnidirectional Videos
|
eess.IV
|
In contrast with traditional video, omnidirectional video enables spherical
viewing direction with support for head-mounted displays, providing an
interactive and immersive experience. Unfortunately, to the best of our
knowledge, there are few visual quality assessment (VQA) methods, either
subjective or objective, for omnidirectional video coding. This paper proposes
both subjective and objective methods for assessing quality loss in encoding
omnidirectional video. Specifically, we first present a new database, which
includes the viewing direction data from several subjects watching
omnidirectional video sequences. Then, from our database, we find a high
consistency in viewing directions across different subjects. The viewing
directions are normally distributed in the center of the front regions, but
they sometimes fall into other regions, depending on the video content. Given this
finding, we present a subjective VQA method for measuring difference mean
opinion score (DMOS) of the whole and regional omnidirectional video, in terms
of overall DMOS (O-DMOS) and vectorized DMOS (V-DMOS), respectively. Moreover,
we propose two objective VQA methods for encoded omnidirectional video, in
light of the human perception characteristics of omnidirectional video. One
method weights the distortion of each pixel according to its distance from the
center of the front regions, reflecting human preference within a panorama. The
other method
predicts viewing directions according to video content, and then the predicted
viewing directions are leveraged to allocate weights to the distortion of each
pixel in our objective VQA method. Finally, our experimental results verify
that both the subjective and objective methods proposed in this paper advance
state-of-the-art VQA for omnidirectional video.
|
electrics
|
3,301 |
Image Acquisition System Using On Sensor Compressed Sampling Technique
|
eess.IV
|
Advances in CMOS technology have made high resolution image sensors possible.
These image sensors pose significant challenges in terms of the amount of raw
data generated, energy efficiency and frame rate. This paper presents a new
design methodology for an imaging system and a simplified novel image sensor
pixel design to be used in such system so that Compressed Sensing (CS)
technique can be implemented easily at the sensor level. This results in
significant energy savings as it not only cuts the raw data rate but also
reduces transistor count per pixel, decreases pixel size, increases fill
factor, simplifies ADC, JPEG encoder and JPEG decoder design and decreases
wiring as well as address decoder size by half. Thus CS has the potential to
increase the resolution of image sensors for a given technology and die size
while significantly decreasing the power consumption and design complexity. We
show that it has the potential to reduce power consumption by about 23%-65%.
|
electrics
|
3,302 |
Having your cake and eating it too: Scripted workflows for image manipulation
|
eess.IV
|
The reproducibility issue in science has come under increased scrutiny. One
consistent suggestion lies in the use of scripted methods or workflows for data
analysis. Image analysis is one area of science in which little is currently
done with scripted methods. The SWIIM Project (Scripted Workflows to Improve
Image
Manipulation) is designed to generate workflows from popular image manipulation
tools. In the project, two approaches are being taken to construct workflows in
the image analysis area. First, the open-source tool GIMP is being enhanced to
produce an active log (which can be run on a stand-alone basis to perform the
same manipulation). Second, the R system Shiny tool is being used to construct
a graphical user interface (GUI) which works with EBImage code to modify
images, and to produce an active log which can perform the same operations.
This process has been successful to date, but is not complete. The basic method
for each component is discussed, and example code is shown.
|
electrics
|
3,303 |
Neuromorphic adaptive edge-preserving denoising filter
|
eess.IV
|
In this paper, we present an on-sensor neuromorphic vision hardware
implementation of a denoising spatial filter. Mean and median spatial filters
with a fixed window shape are known for their denoising ability; however, they
have the drawback of blurring object edges, an effect that increases with
window size. To preserve edge information, we propose an adaptive spatial
filter that uses a neuron's ability to detect similar pixels and then computes
their mean. The analog intensity differences of neighborhood pixels are
converted into chains of pulses by a voltage-controlled oscillator and applied
as neuron input. When the input pulses charge the neuron to a level equal to or
greater than its threshold, the neuron fires and the pixels are identified as
similar. The sequence of neuron responses for the pixels is stored in a
serial-in-parallel-out shift register, whose outputs drive the selector
switches of an averaging circuit, making this an adaptive mean operation and
yielding an edge-preserving mean filter. A system-level simulation of the
hardware is conducted using 150 images from the Caltech database with added
Gaussian noise to test the robustness of the edge-preserving and denoising
ability of the proposed filter. The threshold values of the hardware neuron
were adjusted so that the proposed edge-preserving spatial filter achieves
optimal performance in terms of PSNR and MSE, and the results outperform those
of the conventional mean and median filters.
|
electrics
|
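The similar-pixel averaging this abstract describes can be sketched in
software as follows; the threshold, window radius, and test images below are
illustrative stand-ins, not the paper's hardware parameters.

```python
import numpy as np

def adaptive_mean(img, threshold=20.0, radius=1):
    """Replace each pixel by the mean of only those window neighbours whose
    intensity difference from it stays within the 'neuron' threshold."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            window = img[max(0, y - radius):y + radius + 1,
                         max(0, x - radius):x + radius + 1].astype(float)
            similar = window[np.abs(window - float(img[y, x])) <= threshold]
            out[y, x] = similar.mean()  # centre pixel always qualifies
    return out

# A sharp step edge is left untouched, unlike with a plain box filter.
step = np.zeros((6, 6)); step[:, 3:] = 100.0
assert np.array_equal(adaptive_mean(step, threshold=20.0), step)
```

Because the averaging set excludes pixels on the far side of an edge, noise
within a flat region is smoothed while the edge itself is preserved.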
3,304 |
Statistically Segregated k-Space Sampling for Accelerating Multiple-Acquisition MRI
|
eess.IV
|
A central limitation of multiple-acquisition magnetic resonance imaging (MRI)
is the degradation in scan efficiency as the number of distinct datasets grows.
Sparse recovery techniques can alleviate this limitation via randomly
undersampled acquisitions. A frequent sampling strategy is to prescribe for
each acquisition a different random pattern drawn from a common sampling
density. However, naive random patterns often contain gaps or clusters across
the acquisition dimension that in turn can degrade reconstruction quality or
reduce scan efficiency. To address this problem, a statistically-segregated
sampling method is proposed for multiple-acquisition MRI. This method generates
multiple patterns sequentially, while adaptively modifying the sampling density
to minimize k-space overlap across patterns. As a result, it improves
incoherence across acquisitions while still maintaining similar sampling
density across the radial dimension of k-space. Comprehensive simulations and
in vivo results are presented for phase-cycled balanced steady-state free
precession and multi-echo T$_2$-weighted imaging. Segregated sampling achieves
significantly improved quality in both Fourier and compressed-sensing
reconstructions of multiple-acquisition datasets.
|
electrics
|
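The segregation idea (patterns drawn sequentially from an adaptively modified
density) can be sketched as follows; the 1-D k-space, uniform base density,
and hard exclusion of used locations are simplifying assumptions, whereas the
paper adapts the density more gently to keep the radial profile similar.

```python
import numpy as np

def segregated_patterns(n_acq, n_points, n_samples, rng=None):
    """Draw one random undersampling pattern per acquisition, zeroing the
    sampling density at already-used k-space locations so that later
    patterns avoid overlapping earlier ones."""
    rng = np.random.default_rng(rng)
    density = np.ones(n_points)
    patterns = []
    for _ in range(n_acq):
        p = density / density.sum()
        idx = rng.choice(n_points, size=n_samples, replace=False, p=p)
        patterns.append(np.sort(idx))
        density[idx] = 0.0  # forbid reuse in subsequent acquisitions
    return patterns

pats = segregated_patterns(4, 64, 16, rng=0)
covered = set().union(*(set(p.tolist()) for p in pats))
assert len(covered) == 64  # 4 x 16 disjoint samples tile all of k-space
```

With hard exclusion the patterns are fully disjoint; a softer multiplicative
suppression would trade some overlap for smoother per-pattern densities.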
3,305 |
Light Field Retargeting for Multi-Panel Displays
|
eess.IV
|
Light fields preserve angular information which can be retargeted to
multi-panel depth displays. Due to limited aperture size and constrained
spatial-angular sampling of many light field capture systems, the displayed
light fields provide only a narrow viewing zone in which parallax views can be
supported. In addition, multi-panel displays typically have a limited number
of panels that can only coarsely sample depth content, resulting in a layered
appearance of light fields. We propose a light field retargeting technique for
multi-panel displays that enhances the perceived parallax and achieves seamless
transition over different depths and viewing angles. This is accomplished by
slicing the captured light fields according to their depth content, boosting
the parallax, and blending the results across the panels. Displayed views are
synthesized and aligned dynamically according to the position of the viewer.
The proposed technique is outlined, simulated and verified experimentally on a
three-panel aerial display.
|
electrics
|
3,306 |
A Semi-Automated Technique for Internal Jugular Vein Segmentation in Ultrasound Images Using Active Contours
|
eess.IV
|
The assessment of the blood volume is crucial for the management of many
acute and chronic diseases. Recent studies have shown that circulating blood
volume correlates with the cross-sectional area (CSA) of the internal jugular
vein (IJV) estimated from ultrasound imagery. In this paper, a semi-automatic
segmentation algorithm is proposed using a combination of region growing and
active contour techniques to provide a fast and accurate segmentation of IJV
ultrasound videos. The algorithm is applied to track and segment the IJV across
a range of image qualities, shapes, and temporal variation. The experimental
results show that the algorithm performs well compared to expert manual
segmentation and outperforms several published algorithms incorporating speckle
tracking.
|
electrics
|
3,307 |
A Fast and Efficient Near-Lossless Image Compression using Zipper Transformation
|
eess.IV
|
A near-lossless image compression-decompression scheme is proposed in this
paper using Zipper Transformation (ZT) and inverse zipper transformation (iZT).
The proposed ZT exploits the conjugate symmetry property of Discrete Fourier
Transformation (DFT). The proposed transformation is implemented using two
different configurations: the interlacing and concatenating ZT. In order to
quantify the efficacy of the proposed transformation, we benchmark with
Discrete Cosine Transformation (DCT) and Fast Walsh Hadamard Transformation
(FWHT) in terms of lossless compression capability and computational cost.
Numerical simulations show that the ZT-based compression algorithm is
near-lossless, compresses better, and offers faster implementation than both
DCT and FWHT. Also, the interlacing and concatenating ZTs are shown to yield similar
results in most of the test cases considered.
|
electrics
|
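The conjugate-symmetry property of the DFT that ZT exploits is easy to verify
numerically with NumPy's real FFT; the zipper transformation itself is the
paper's own construction and is not reproduced here.

```python
import numpy as np

# For a real-valued image, the DFT satisfies F[-k] = conj(F[k]), so only
# about half of the spectrum is non-redundant. np.fft.rfft2 stores exactly
# that half, and irfft2 reconstructs the image losslessly (up to
# floating-point precision).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)

half = np.fft.rfft2(img)   # shape (8, 5): the non-redundant half-spectrum
full = np.fft.fft2(img)    # shape (8, 8): the full, redundant spectrum

assert half.shape == (8, 5)
assert np.allclose(full[:, :5], half)  # rfft2 is the left half of fft2
assert np.allclose(np.fft.irfft2(half, s=img.shape), img)
```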
3,308 |
On-the-fly Adaptive $k$-Space Sampling for Linear MRI Reconstruction Using Moment-Based Spectral Analysis
|
eess.IV
|
In high-dimensional magnetic resonance imaging applications, time-consuming,
sequential acquisition of data samples in the spatial frequency domain
($k$-space) can often be accelerated by accounting for dependencies along
imaging dimensions other than space in linear reconstruction, at the cost of
noise amplification that depends on the sampling pattern. Examples are
support-constrained, parallel, and dynamic MRI, and $k$-space sampling
strategies are primarily driven by image-domain metrics that are expensive to
compute for arbitrary sampling patterns. It remains challenging to provide
systematic and computationally efficient automatic designs of arbitrary
multidimensional Cartesian sampling patterns that mitigate noise amplification,
given the subspace to which the object is confined. To address this problem,
this work introduces a theoretical framework that describes local geometric
properties of the sampling pattern and relates these properties to a measure of
the spread in the eigenvalues of the information matrix described by its first
two spectral moments. This new criterion is then used for very efficient
optimization of complex multidimensional sampling patterns that does not
require reconstructing images or explicitly mapping noise amplification.
Experiments with in vivo data show strong agreement between this criterion and
traditional, comprehensive image-domain- and $k$-space-based metrics,
indicating the potential of the approach for computationally efficient
(on-the-fly), automatic, and adaptive design of sampling patterns.
|
electrics
|
3,309 |
Silver Standard Masks for Data Augmentation Applied to Deep-Learning-Based Skull-Stripping
|
eess.IV
|
The bottleneck of convolutional neural networks (CNN) for medical imaging is
the number of annotated data required for training. Manual segmentation is
considered to be the "gold-standard". However, medical imaging datasets with
expert manual segmentation are scarce as this step is time-consuming and
expensive. We propose in this work the use of what we refer to as silver
standard masks for data augmentation in deep-learning-based skull-stripping,
also known as brain extraction. We generated the silver standard masks using
the consensus algorithm Simultaneous Truth and Performance Level Estimation
(STAPLE). We evaluated CNN models generated by the silver and gold standard
masks. Then, we validated the silver standard masks for CNN training in one
dataset and showed their generalization to two other datasets. Our results
indicated that models generated with silver standard masks are comparable to
models generated with gold standard masks and have better generalizability.
Moreover, our results also indicate that silver standard masks could be used to
augment the input dataset at training stage, reducing the need for manual
segmentation at this step.
|
electrics
|
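A consensus mask in the spirit of the abstract can be sketched with a
per-voxel majority vote; note this is a simplified stand-in for STAPLE, which
additionally weights each input segmentation by its estimated performance via
an EM algorithm.

```python
import numpy as np

def majority_consensus(masks):
    """Fuse several automatic binary segmentations into a 'silver standard'
    mask by strict per-voxel majority vote."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)

# Three (hypothetical) automatic segmentations of the same image.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
c = np.array([[1, 1, 0], [1, 1, 0]])
silver = majority_consensus([a, b, c])
assert silver.tolist() == [[True, True, False], [False, True, False]]
```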
3,310 |
Diffraction Influence on the Field of View and Resolution of Three-Dimensional Integral Imaging
|
eess.IV
|
The influence of the diffraction limit on the field of view of
three-dimensional integral imaging (InI) systems is estimated by calculating
the resolution of the InI system along arbitrarily tilted directions. The
deteriorating effects of diffraction on the resolution are quantified in this
manner. Two different three-dimensional scenes are recorded by real/virtual and
focused imaging modes. The recorded scenes are reconstructed at different
tilted planes and the obtained results for the resolution and field of view of
the system are verified. It is shown that the diffraction effects severely
affect the resolution of InI in the real/virtual mode when the tilted angle of
viewing is increased. It is also shown that the resolution of InI in the
focused mode is more robust to the unwanted effects of diffraction even though
it is much lower than the resolution of InI in the real/virtual mode.
|
electrics
|
3,311 |
Semi-Parallel Deep Neural Networks (SPDNN), Convergence and Generalization
|
eess.IV
|
The Semi-Parallel Deep Neural Network (SPDNN) idea is explained in this
article. It is shown that the convergence of the mixed network is very close
to that of the best network in the set, and that the generalization of the
SPDNN is better than that of all the parent networks.
|
electrics
|
3,312 |
Exploiting Occlusion in Non-Line-of-Sight Active Imaging
|
eess.IV
|
Active non-line-of-sight imaging systems are of growing interest for diverse
applications. The most commonly proposed approaches to date rely on exploiting
time-resolved measurements, i.e., measuring the time it takes for short light
pulses to transit the scene. This typically requires expensive, specialized,
ultrafast lasers and detectors that must be carefully calibrated. We develop an
alternative approach that exploits the valuable role that natural occluders in
a scene play in enabling accurate and practical image formation in such
settings without such hardware complexity. In particular, we demonstrate that
the presence of occluders in the hidden scene can obviate the need for
collecting time-resolved measurements, and develop an accompanying analysis for
such systems and their generalizations. Ultimately, the results suggest the
potential to develop increasingly sophisticated future systems that are able to
identify and exploit diverse structural features of the environment to
reconstruct scenes hidden from view.
|
electrics
|
3,313 |
Efficient and fast algorithms to generate holograms for optical tweezers
|
eess.IV
|
We discuss and compare three algorithms for generating holograms: simple
rounding, Floyd-Steinberg error diffusion dithering, and mixed region amplitude
freedom (MRAF). The methods are optimised for producing large arrays of tightly
focused optical tweezers for trapping particles. The algorithms are compared in
terms of their speed, efficiency, and accuracy, for periodic arrangements of
traps; an arrangement of particular interest in the field of quantum computing.
We simulate the image formation using each of a binary amplitude modulating
digital mirror device (DMD) and a phase modulating spatial light modulator
(PSLM) as the display element. While a DMD allows for fast frame rates, the
slower PSLM is more efficient and provides higher accuracy with a
quasi-continuous variation of phase. We discuss the relative merits of each
algorithm for use with both a DMD and a PSLM, allowing one to choose the ideal
approach depending on the circumstances.
|
electrics
|
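Of the three algorithms the abstract compares, Floyd-Steinberg error
diffusion is standard enough to sketch for a binary-amplitude DMD; the target
pattern below is illustrative, not an actual tweezer array.

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale target in [0, 1] by diffusing each pixel's
    quantization error to unvisited neighbours with the classic
    7/16, 3/16, 5/16, 1/16 weights."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # below-left
                img[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # below-right
    return out

target = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
binary = floyd_steinberg(target)
assert set(np.unique(binary)) <= {0.0, 1.0}
assert abs(binary.mean() - target.mean()) < 0.05  # local intensity preserved
```

Error diffusion keeps the local average of the binary pattern close to the
grayscale target, which is why it outperforms simple rounding for hologram
generation on binary devices.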
3,314 |
EDIZ: An Error Diffusion Image Zooming Scheme
|
eess.IV
|
Interpolation based image zooming methods provide a high execution speed and
low computational complexity. However, the quality of the zoomed images is
unsatisfactory in many cases. The main challenge of super-resolution methods
is to add new details to the image. This paper proposes a new algorithm that
creates such details using a zoom-out-zoom-in strategy, which reduces blurring
effects by adding the estimated error back to the final image.
Experimental results for natural images confirm the algorithm's ability to
create visually pleasing results.
|
electrics
|
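The zoom-out-zoom-in strategy can be sketched as follows, using
nearest-neighbour resizing as a stand-in for the paper's interpolation
kernel; the round-trip error at the original scale estimates the detail the
kernel loses, and adding it back to the zoomed image counteracts blurring.

```python
import numpy as np

def zoom(img, factor):
    """Nearest-neighbour resize, standing in for any interpolation kernel."""
    h, w = img.shape
    ys = np.minimum((np.arange(int(h * factor)) / factor).astype(int), h - 1)
    xs = np.minimum((np.arange(int(w * factor)) / factor).astype(int), w - 1)
    return img[np.ix_(ys, xs)]

def ediz_zoom(img, factor=2):
    """Zoom-out-zoom-in: estimate the detail the kernel loses on a lossy
    round trip at the original scale, then add that error back to the
    naively zoomed image."""
    naive = zoom(img, factor)
    roundtrip = zoom(zoom(img, 1 / factor), factor)  # lossy round trip
    error = img - roundtrip                          # detail lost by kernel
    return naive + zoom(error, factor)

img = np.arange(64, dtype=float).reshape(8, 8)
out = ediz_zoom(img, 2)
assert out.shape == (16, 16)
# On a constant image nothing is lost, so the correction is a no-op.
assert np.allclose(ediz_zoom(np.full((8, 8), 7.0), 2), 7.0)
```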
3,315 |
Cascaded Reconstruction Network for Compressive image sensing
|
eess.IV
|
The theory of compressed sensing (CS) has been successfully applied to image
compression in the past few years, although its traditional iterative
reconstruction algorithms are time-consuming. It has been reported that deep
learning-based CS reconstruction algorithms can greatly reduce the
computational complexity.
In this paper, we propose two efficient structures of cascaded reconstruction
networks corresponding to two different sampling methods in CS process. The
first reconstruction network is a compatibly sampling reconstruction network
(CSRNet), which recovers an image from its compressively sensed measurement
sampled by a traditional random matrix. In CSRNet, a deep reconstruction
network module obtains an initial image of acceptable quality, which can be
further improved by a residual network module based on a convolutional neural
network. The second reconstruction network is an adaptively-sampling
reconstruction network (ASRNet), which matches an automatic sampling module
with a corresponding residual
reconstruction module. The experimental results have shown that the proposed
two reconstruction networks outperform several state-of-the-art compressive
sensing reconstruction algorithms. Meanwhile, the proposed ASRNet can achieve
more than 1 dB gain compared with CSRNet.
|
electrics
|
3,316 |
Learning Based Segmentation of CT Brain Images: Application to Post-Operative Hydrocephalic Scans
|
eess.IV
|
Objective: Hydrocephalus is a medical condition in which there is an abnormal
accumulation of cerebrospinal fluid (CSF) in the brain. Segmentation of brain
imagery into brain tissue and CSF (before and after surgery, i.e. pre-op vs.
post-op) plays a crucial role in evaluating surgical treatment. Segmentation of
pre-op images is often a relatively straightforward problem and has been well
researched. However, segmenting post-operative (post-op) computational
tomographic (CT)-scans becomes more challenging due to distorted anatomy and
subdural hematoma collections pressing on the brain. Most intensity- and
feature-based segmentation methods fail to separate subdurals from brain and
CSF, as
subdural geometry varies greatly across different patients and their intensity
varies with time. We combat this problem by a learning approach that treats
segmentation as supervised classification at the pixel level, i.e. a training
set of CT scans with labeled pixel identities is employed. Methods: Our
contributions include: 1.) a dictionary learning framework that learns class
(segment) specific dictionaries that can efficiently represent test samples
from the same class while poorly representing corresponding samples from other
classes, 2.) quantification of associated computation and memory footprint, and
3.) a customized training and test procedure for segmenting post-op
hydrocephalic CT images. Results: Experiments performed on infant CT brain
images acquired from the CURE Children's Hospital of Uganda reveal the success
of our method against the state-of-the-art alternatives. We also demonstrate
that the proposed algorithm is computationally less burdensome and degrades
gracefully as the number of training samples decreases, enhancing its
deployment potential.
|
electrics
|
3,317 |
Analysis-synthesis model learning with shared features: a new framework for histopathological image classification
|
eess.IV
|
Automated histopathological image analysis offers exciting opportunities for
the early diagnosis of several medical conditions, including cancer. There
are, however, stiff practical challenges: 1.) discriminative features from
such
images for separating diseased vs. healthy classes are not readily apparent,
and 2.) distinct classes, e.g. healthy vs. stages of disease continue to share
several geometric features. We propose a novel Analysis-synthesis model
Learning with Shared Features algorithm (ALSF) for classifying such images more
effectively. In ALSF, a joint analysis and synthesis learning model is
introduced to learn the classifier and the feature extractor at the same time.
In this way, the computational load of patch-level image classification can
be greatly reduced. Crucially, we integrate into this framework the learning
of a
low-rank shared dictionary and a shared analysis operator, which more
accurately represent both similarities and differences in histopathological
images from distinct classes. ALSF is evaluated on two challenging databases:
(1) kidney tissue images provided by the Animal Diagnosis Lab (ADL) at the
Pennsylvania State University and (2) brain tumor images from The Cancer Genome
Atlas (TCGA) database. Experimental results confirm that ALSF can offer
benefits over state-of-the-art alternatives.
|
electrics
|
3,318 |
RIBBONS: Rapid Inpainting Based on Browsing of Neighborhood Statistics
|
eess.IV
|
Image inpainting refers to filling in missing regions of an image using
neighboring pixels, and it has many applications in different image processing
tasks. Most of these applications enhance image quality by repairing
significant unwanted changes or even the elimination of some existing pixels.
Such repairs require considerable computation, which in turn results in long
processing times. In this paper we propose a fast inpainting algorithm called
RIBBONS, based on the selection of patches around each missing pixel. This
accelerates execution and enables online frame inpainting in
video. The applied cost-function is a combination of statistical and spatial
features in all neighboring pixels. We evaluate some candidate patches using
the proposed cost function and minimize it to achieve the final patch.
Experimental results show the higher speed of RIBBONS in comparison with
previous methods while being comparable in terms of PSNR and SSIM for the
images in MISC dataset.
|
electrics
|
3,319 |
Image Inpainting by Hyperbolic Selection of Pixels for Two Dimensional Bicubic Interpolations
|
eess.IV
|
Image inpainting is a restoration process which has numerous applications.
Restoring old scanned images with scratches or removing objects from images
are some of its applications. Different approaches have been used for the
implementation of inpainting algorithms. Interpolation approaches only consider
one direction for this purpose. In this paper we present a new perspective on
image inpainting. We consider multiple directions and apply both
one-dimensional and two-dimensional bicubic interpolations. Neighboring pixels
are selected in a hyperbolic formation to better preserve corner pixels. We
compare our work with recent inpainting approaches to show our superior
results.
|
electrics
|
3,320 |
Predicting Encoded Picture Quality in Two Steps is a Better Way
|
eess.IV
|
Full-reference (FR) image quality assessment (IQA) models assume a high
quality "pristine" image as a reference against which to measure perceptual
image quality. In many applications, however, the assumption that the reference
image is of high quality may be untrue, leading to incorrect perceptual quality
predictions. To address this, we propose a new two-step image quality
prediction approach which integrates both no-reference (NR) and full-reference
perceptual quality measurements into the quality prediction process. The
no-reference module accounts for the possibly imperfect quality of the source
(reference) image, while the full-reference component measures the quality
differences between the source image and its possibly further distorted
version. A simple, yet very efficient, multiplication step fuses the two
sources of information into a reliable objective prediction score. We evaluated
our two-step approach on a recently designed subjective image database and
achieved standout performance compared to full-reference approaches, especially
when the reference images were of low quality. The proposed approach is made
publicly available at https://github.com/xiangxuyu/2stepQA
|
electrics
|
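The multiplicative fusion at the heart of the two-step approach can be
sketched as follows; the PSNR-based full-reference score and the hand-picked
no-reference scores are illustrative stand-ins for the actual NR/FR models.

```python
import numpy as np

def fr_score(ref, dist, max_val=255.0):
    """Full-reference quality in [0, 1]: PSNR, crudely normalized.
    (A stand-in for a real FR model.)"""
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    if mse == 0:
        return 1.0
    psnr = 10.0 * np.log10(max_val ** 2 / mse)
    return float(np.clip(psnr / 50.0, 0.0, 1.0))

def two_step_score(nr_of_reference, ref, dist):
    """Step 1: an NR model rates the (possibly imperfect) reference.
    Step 2: an FR model rates the distorted image against the reference.
    A single multiplication fuses the two into one prediction."""
    return nr_of_reference * fr_score(ref, dist)

ref = np.full((8, 8), 128.0)
dist = ref.copy(); dist[0, 0] += 10.0
# The same FR difference yields a lower overall score when the reference
# itself is judged to be of poor quality.
assert two_step_score(0.5, ref, dist) < two_step_score(1.0, ref, dist)
```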
3,321 |
Mosaicked multispectral image compression based on inter- and intra-band correlation
|
eess.IV
|
Multispectral imaging has been utilized in many fields, but the cost of
capturing and storing image data is still high. Single-sensor cameras with
multispectral filter arrays can reduce the cost of capturing images at the
expense of slightly lower image quality. When multispectral filter arrays are
used, conventional multispectral image compression methods can be applied after
interpolation, but the compressed image data after interpolation has some
redundancy because the interpolated data are computed from the captured raw
data. In this paper, we propose an efficient image compression method for
single-sensor multispectral cameras. The proposed method encodes the captured
multispectral data before interpolation. We also propose a new spectral
transform method for the compression of mosaicked multispectral images. This
transform is designed by considering the filter arrangement and the spectral
sensitivities of a multispectral filter array. The experimental results show
that the proposed method achieves a higher peak signal-to-noise ratio at higher
bit rates than a conventional compression method that encodes a multispectral
image after interpolation, e.g., 3-dB gain over conventional compression when
coding at rates of over 0.1 bit/pixel/band.
|
electrics
|
3,322 |
Region of Interest (ROI) Coding for Aerial Surveillance Video using AVC & HEVC
|
eess.IV
|
Aerial surveillance from Unmanned Aerial Vehicles (UAVs), i.e. with moving
cameras, is of growing interest for police as well as disaster area monitoring.
For more detailed ground images the camera resolutions are steadily increasing.
Simultaneously the amount of video data to transmit is increasing
significantly, too. To reduce the amount of data, Region of Interest (ROI)
coding systems were introduced which mainly encode some regions in higher
quality at the cost of the remaining image regions. We employ an existing ROI
coding system relying on global motion compensation to retain full image
resolution over the entire image. Different ROI detectors are used to
automatically classify a video image on board the UAV into ROI and non-ROI. We
propose to replace the modified Advanced Video Coding (AVC) video encoder by a
modified High Efficiency Video Coding (HEVC) encoder. Without any change of the
detection system itself, but by replacing the video coding back-end we are able
to improve the coding efficiency by 32% on average although regular HEVC
provides coding gains of only 12-30% for the same test sequences and similar
PSNR compared to regular AVC coding. Since the employed ROI coding mainly
relies on intra mode coding of new emerging image areas, gains of HEVC-ROI
coding over AVC-ROI coding compared to regular coding of the entire frames
including predictive modes (inter) depend on sequence characteristics. We
present a detailed analysis of bit distribution within the frames to explain
the gains. In total we can provide coding data rates of 0.7-1.0 Mbit/s for full
HDTV video sequences at 30 fps at reasonable quality of more than 37 dB.
|
electrics
|
3,323 |
Object-based Multipass InSAR via Robust Low Rank Tensor Decomposition
|
eess.IV
|
The most unique advantage of multipass SAR interferometry (InSAR) is the
retrieval of long term geophysical parameters, e.g. linear deformation rates,
over large areas. Recently, an object-based multipass InSAR framework has been
proposed in [1], as an alternative to the typical single-pixel methods, e.g.
Persistent Scatterer Interferometry (PSI), or pixel-cluster-based methods, e.g.
SqueeSAR. This enables the exploitation of inherent properties of InSAR phase
stacks on an object level. As a follow-on, this paper investigates the inherent
low rank property of such phase tensors, and proposes a Robust Multipass InSAR
technique via Object-based low rank tensor decomposition (RoMIO). We
demonstrate that the filtered InSAR phase stacks can improve the accuracy of
geophysical parameters estimated via conventional multipass InSAR techniques,
e.g. PSI, by a factor of ten to thirty in typical settings. The proposed method
is particularly effective against outliers, such as pixels with unmodeled
phases. These merits in turn can effectively reduce the number of images
required for a reliable estimation. The promising performance of the proposed
method is demonstrated using high-resolution TerraSAR-X image stacks.
|
electrics
|
3,324 |
Snapshot light-field laryngoscope
|
eess.IV
|
The convergence of recent advances in optical fabrication and digital
processing yields a new generation of imaging technology: light-field cameras,
which bridge the realms of applied mathematics, optics, and high-performance
computing. Herein, for the first time, we introduce the paradigm of light-field
imaging into laryngoscopy. The resultant probe can image the three-dimensional
(3D) shape of vocal folds within a single camera exposure. Furthermore, to
improve the spatial resolution, we developed an image fusion algorithm,
providing a simple solution to a long-standing problem in light-field imaging.
|
electrics
|
3,325 |
Reconstruction of Compressively Sensed Images using Convex Tikhonov Sparse Dictionary Learning and Adaptive Spectral Filtering
|
eess.IV
|
Sparse representation using over-complete dictionaries has been shown to
produce good-quality results in various image processing tasks. Dictionary
learning algorithms have made it possible to engineer data-adaptive
dictionaries which
have promising applications in image compression and image enhancement. The
most common sparse dictionary learning algorithms use the techniques of
matching pursuit and K-SVD iteratively for sparse coding and dictionary
learning, respectively. While this technique produces good results, it requires
a large number of iterations to converge to an optimal solution. In this
article, we use a closed form stabilized convex optimization technique for both
sparse coding and dictionary learning. The approach results in providing the
best possible dictionary and the sparsest representation resulting in minimum
reconstruction error. Once the image is reconstructed from the compressively
sensed samples, we use adaptive frequency and spatial filtering techniques to
move towards exact image recovery. It is clearly seen from the results that the
proposed algorithm provides much better reconstruction results than
conventional sparse dictionary techniques for a fixed number of iterations.
Depending inversely upon the number of details present in the image, the
proposed algorithm reaches the optimal solution with a significantly lower
number of iterations. Consequently, high PSNR and low MSE are obtained using the
proposed algorithm for our compressive sensing framework.
|
electrics
|
3,326 |
Polyp Segmentation in Colonoscopy Images Using Fully Convolutional Network
|
eess.IV
|
Colorectal cancer is one of the leading causes of cancer-related death,
especially in men. Polyps are one of the main causes of colorectal cancer and
early diagnosis of polyps by colonoscopy could result in successful treatment.
Diagnosis of polyps in colonoscopy videos is a challenging task due to
variations in the size and shape of polyps. In this paper, we propose a polyp
segmentation method based on a convolutional neural network. Performance of the
method is enhanced by two strategies. First, we perform a novel image patch
selection method in the training phase of the network. Second, in the test
phase, we perform an effective post processing on the probability map that is
produced by the network. Evaluation of the proposed method using the
CVC-ColonDB database shows that our proposed method achieves more accurate
results in comparison with previous colonoscopy video-segmentation methods.
|
electrics
|
3,327 |
Face Synthesis with Landmark Points from Generative Adversarial Networks and Inverse Latent Space Mapping
|
eess.IV
|
Facial landmarks refer to the localization of fundamental facial points on
face images. There have been a tremendous number of attempts to detect these
points in facial images; however, there has never been an attempt to
synthesize a random face and generate its corresponding facial landmarks. This
paper presents a framework for augmenting a dataset in a latent Z-space,
applied to the regression problem of generating a corresponding set of
landmarks from a 2D facial dataset. The BEGAN framework has been used to train
a face generator from the CelebA database. The inverse of the generator is
implemented using an Adam optimizer to generate the latent vector corresponding
to each facial image, and a lightweight deep neural network is trained to map
latent Z-space vectors to the landmark space. Initial results are promising and
provide a generic methodology to augment annotated image datasets with
additional intermediate samples.
|
electrics
|
3,328 |
Satellite Image Scene Classification via ConvNet with Context Aggregation
|
eess.IV
|
Scene classification is a fundamental problem in understanding
high-resolution remote sensing imagery. Recently, convolutional neural network
(ConvNet) has achieved remarkable performance in different tasks, and
significant efforts have been made to develop various representations for
satellite image scene classification. In this paper, we present a novel
representation based on a ConvNet with context aggregation. The proposed
two-pathway ResNet (ResNet-TP) architecture adopts the ResNet as backbone, and
the two pathways allow the network to model both local details and regional
context. The ResNet-TP based representation is generated by global average
pooling on the last convolutional layers from both pathways. Experiments on two
scene classification datasets, UCM Land Use and NWPU-RESISC45, show that the
proposed mechanism achieves promising improvements over state-of-the-art
methods.
|
electrics
|
3,329 |
Efficient Nonlinear Transforms for Lossy Image Compression
|
eess.IV
|
We assess the performance of two techniques, Sadam and GDN, in the context of
nonlinear transform coding with artificial neural networks. Both
techniques have been successfully used in state-of-the-art image compression
methods, but their performance has not been individually assessed to this
point. Together, the techniques stabilize the training procedure of nonlinear
image transforms and increase their capacity to approximate the (unknown)
rate-distortion optimal transform functions. Besides comparing their
performance to established alternatives, we detail the implementation of both
methods and provide open-source code along with the paper.
|
electrics
|
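The GDN nonlinearity named in the abstract above can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of the simplified GDN form commonly used in learned image compression, y_i = x_i / sqrt(beta_i + sum_j gamma_ij x_j^2); the exact parameterization assessed in the paper may differ.

```python
# Hypothetical minimal GDN layer (simplified form from the learned image
# compression literature; not necessarily the paper's exact variant).
import numpy as np

def gdn(x, beta, gamma):
    """Apply GDN across the channel axis.

    x     : (channels, height, width) activations
    beta  : (channels,) positive offsets
    gamma : (channels, channels) non-negative cross-channel weights
    """
    # The denominator pools squared activations across channels at each pixel.
    norm = np.sqrt(beta[:, None, None]
                   + np.tensordot(gamma, x ** 2, axes=([1], [0])))
    return x / norm

# Sanity check: with beta = 1 and gamma = 0, GDN reduces to the identity.
x = np.random.randn(3, 4, 4)
assert np.allclose(gdn(x, beta=np.ones(3), gamma=np.zeros((3, 3))), x)
```

In practice beta and gamma are learned jointly with the transform weights, which is where the training-stabilization issue discussed in the abstract arises.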
3,330 |
Determining JPEG Image Standard Quality Factor from the Quantization Tables
|
eess.IV
|
Identifying the quality factor of JPEG images is very useful for applications
in digital image forensics. Though several command-line tools exist and are
included in widely used software such as \emph{GIMP} (GNU Image Manipulation
Program), the well-known image editing software, or the \emph{ImageMagick}
suite, we have found that these may provide inaccurate or even wrong results.
This paper presents a simple method for determining the exact quality factor of
a JPEG image from its quantization tables. The method is presented briefly, and
a sample program, written in the Unix/Linux Bash shell language, is provided.
|
electrics
|
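The quantization-table approach described above can be sketched in a few lines. This is an assumption-laden illustration following the standard IJG/libjpeg quality-scaling convention; the paper's actual sample program is written in Bash and may differ in detail.

```python
# Sketch: recover the IJG quality factor from a luminance quantization table
# by regenerating candidate tables for Q = 1..100 and picking the closest.

# Standard JPEG Annex K luminance base table.
BASE_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def scaled_table(quality):
    """IJG scaling: quality 1..100 -> scaled quantization table."""
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (b * scale + 50) // 100)) for b in BASE_LUMA]

def estimate_quality(table):
    """Return the quality factor whose scaled table is nearest to `table`."""
    return min(range(1, 101),
               key=lambda q: sum((a - b) ** 2
                                 for a, b in zip(scaled_table(q), table)))

# A table generated at quality 75 round-trips exactly.
assert estimate_quality(scaled_table(75)) == 75
```

When the table does not match any IJG-scaled table exactly (e.g. a custom table), the nearest-match result is only an approximation, which is one plausible source of the inaccuracies the abstract reports in existing tools.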
3,331 |
Classification of Informative Frames in Colonoscopy Videos Using Convolutional Neural Networks with Binarized Weights
|
eess.IV
|
Colorectal cancer is one of the most common cancers in the United States. Polyps
are one of the main causes of colon cancer, and early detection of polyps
increases the chance of successful treatment. In this paper, we propose a novel
classification of informative frames based on a convolutional neural network
with binarized weights. The proposed CNN is trained with colonoscopy frames
along with the labels of the frames as input data. We also used binarized
weights and kernels to reduce the size of CNN and make it suitable for
implementation in medical hardware. We evaluate our proposed method on the ASU
Mayo Clinic database, which contains colonoscopy videos of different
patients. Our proposed method reaches a dice score of 71.20% and accuracy of
more than 90% using the mentioned dataset.
|
electrics
|
3,332 |
A Practical Guide to Multi-image Alignment
|
eess.IV
|
Multi-image alignment, bringing a group of images into common register, is a
ubiquitous problem and the first step of many applications in a wide variety of
domains. As a result, a great amount of effort is being invested in developing
efficient multi-image alignment algorithms. Little has been done, however, to
answer fundamental practical questions such as: What is the comparative
performance of existing methods? Is there still room for improvement? Under
which conditions should one technique be preferred over another? Does adding
more images or prior image information improve the registration results? In
this work, we present a thorough analysis and evaluation of the main
multi-image alignment methods which, combined with theoretical limits in
multi-image alignment performance, allows us to organize them under a common
framework and provide practical answers to these essential questions.
|
electrics
|
3,333 |
On variational solutions for whole brain serial-section histology using the computational anatomy random orbit model
|
eess.IV
|
This paper presents a variational framework for dense diffeomorphic
atlas-mapping onto high-throughput histology stacks at the 20 um meso-scale.
The observed sections are modelled as Gaussian random fields conditioned on a
sequence of unknown section by section rigid motions and unknown diffeomorphic
transformation of a three-dimensional atlas. To regularize over the
high-dimensionality of our parameter space (which is a product space of the
rigid motion dimensions and the diffeomorphism dimensions), the histology
stacks are modelled as arising from a first order Sobolev space smoothness
prior. We show that the joint maximum a-posteriori, penalized-likelihood
estimator of our high dimensional parameter space emerges as a joint
optimization interleaving rigid motion estimation for histology restacking and
large deformation diffeomorphic metric mapping to atlas coordinates. We show
that joint optimization in this parameter space solves the classical curvature
non-identifiability of the histology stacking problem. The algorithms are
demonstrated on a collection of whole-brain histological image stacks from the
Mouse Brain Architecture Project.
|
electrics
|
3,334 |
Estimating Depth-Salient Edges And its Application To Stereoscopic Image Quality Assessment
|
eess.IV
|
The human visual system pays attention to salient regions while perceiving an
image. When viewing a stereoscopic 3D (S3D) image, we hypothesize that while
most of the contribution to saliency is provided by the 2D image, a small but
significant contribution is provided by the depth component. Further, we claim
that only a subset of image edges contribute to depth perception while viewing
an S3D image. In this paper, we propose a systematic approach for depth
saliency estimation, called Salient Edges with respect to Depth perception
(SED) which localizes the depth-salient edges in an S3D image. We demonstrate
the utility of SED in full reference stereoscopic image quality assessment
(FRSIQA). We consider gradient magnitude and inter-gradient maps for predicting
structural similarity. A coarse quality estimate is derived first by comparing
the 2D saliency and gradient maps of reference and test stereo pairs. We refine
this quality using SED maps for evaluating depth quality. Finally, we combine
this luminance and depth quality to obtain an overall stereo image quality. We
perform a comprehensive evaluation of our metric on seven publicly available
S3D IQA databases. The proposed metric shows competitive performance on all
seven databases with state-of-the-art performance on three of them.
|
electrics
|
3,335 |
The Coupled TuFF-BFF Algorithm for Automatic 3D Segmentation of Microglia
|
eess.IV
|
We propose an automatic 3D segmentation algorithm for multiphoton microscopy
images of microglia. Our method is capable of segmenting tubular and blob-like
structures from noisy images. Current segmentation techniques and software fail
to capture the fine processes and soma of the microglia cells, useful for the
study of the microglia role in the brain during healthy and diseased states.
Our coupled tubularity flow field (TuFF)-blob flow field (BFF) method evolves a
level set toward the object boundary using the directional tubularity and
blobness measure of 3D images. Our method achieved a 20% performance increase
over state-of-the-art segmentation methods on a dataset of 3D images of
microglia, even in images with intensity heterogeneity throughout the object.
The coupled TuFF-BFF segmentation results also yielded a 40% improvement in
accuracy for the ramification index of the processes, which demonstrates the
efficacy of our method.
|
electrics
|
3,336 |
On Random-Matrix Bases, Ghost Imaging and X-ray Phase Contrast Computational Ghost Imaging
|
eess.IV
|
A theory of random-matrix bases is presented, including expressions for
orthogonality, completeness and the random-matrix synthesis of arbitrary
matrices. This is applied to ghost imaging as the realization of a random-basis
reconstruction, including an expression for the resulting signal-to-noise
ratio. Analysis of conventional direct imaging and ghost imaging leads to a
criterion which, when satisfied, implies reduced dose for computational ghost
imaging. We also propose an experiment for x-ray phase contrast computational
ghost imaging, which enables differential phase contrast to be achieved in an
x-ray ghost imaging context. We give a numerically robust solution to the
associated inverse problem of decoding differential phase contrast x-ray ghost
images, to yield a quantitative map of the projected thickness of the sample.
|
electrics
|
3,337 |
Time-Series Based Thermography on Concrete Block Void Detection
|
eess.IV
|
Using thermography as a nondestructive method for subsurface detection of the
concrete structure has been developed for decades. However, the performance of
current practice is limited due to the heavy reliance on the environmental
conditions as well as complex environmental noise. A non-time-series method
suffers from solar radiation reflected by the target during the
heating stage and from potentially non-uniform heat distribution. These
limitations are the major constraints of the traditional single thermal image
method. Time series-based methods such as Fourier transform-based pulse phase
thermography, principal component thermography, and high-order statistics have
been reported with robust results on surface reflective property difference and
non-uniform heat distribution under the experimental setting. This paper aims
to compare the performance of the above methods to that of the conventional static
thermal imaging method. The case used for the comparison is to detect voids in
a hollow concrete block during the heating phase. The result was quantitatively
evaluated by using Signal-to-Noise Ratio. Favorable performance was observed
using time-series methods compared to the single image approach.
|
electrics
|
3,338 |
Nonlinear Shape Regression For Filtering Segmentation Results From Calcium Imaging
|
eess.IV
|
A shape filter is presented to repair segmentation results obtained in
calcium imaging of neurons in vivo. This post-segmentation algorithm can
automatically smooth the shapes obtained from a preliminary segmentation, while
precluding the cases where two neurons are counted as one combined component.
The shape filter is realized using a square-root velocity representation to
project the shapes onto a shape manifold in which distances between shapes are
based on elastic
changes. Two data-driven weighting methods are proposed to achieve a trade-off
between shape smoothness and consistency with the data. Intuitive comparisons
of proposed methods via projection onto Cartesian maps demonstrate the
smoothing ability of the shape filter. Quantitative measures also prove the
superiority of our methods over models that do not employ any weighting
criterion.
|
electrics
|
3,339 |
OSLO: Automatic Cell Counting and Segmentation for Oligodendrocyte Progenitor Cells
|
eess.IV
|
Reliable cell counting and segmentation of oligodendrocyte progenitor cells
(OPCs) are critical image analysis steps that could potentially unlock
mysteries regarding OPC function during pathology. We propose a saliency-based
method to detect OPCs and use a marker-controlled watershed algorithm to
segment the OPCs. This method first implements frequency-tuned saliency
detection on separate channels to obtain regions of cell candidates. Final
detection results and internal markers can be computed by combining information
from separate saliency maps. An optimal saliency level for OPCs (OSLO) is
highlighted in this work. Here, watershed segmentation is performed efficiently
with effective internal markers. Experiments show that our method outperforms
existing methods in terms of accuracy.
|
electrics
|
3,340 |
Computational Image Enhancement for Frequency Modulated Continuous Wave (FMCW) THz Image
|
eess.IV
|
In this paper, a novel method to enhance Frequency Modulated Continuous Wave
(FMCW) THz imaging resolution beyond its diffraction limit is proposed. Our
method comprises two stages. Firstly, we reconstruct the signal in
depth-direction using a sinc-envelope, yielding a significant improvement in
depth estimation and signal parameter extraction. The resulting high precision
depth estimate is used to deduce an accurate reflection intensity THz image.
This image is fed in the second stage of our method to a 2D blind deconvolution
procedure, adopted to enhance the lateral THz image resolution beyond the
diffraction limit. Experimental data acquired with a FMCW system operating at
577 GHz with a bandwidth of 126 GHz shows that the proposed method enhances the
lateral resolution by a factor of 2.29, to 346.2 um, with respect to the
diffraction limit. The depth accuracy is 91 um. Interestingly, the lateral
resolution enhancement achieved with this blind deconvolution concept leads to
better results in comparison to conventional Gaussian deconvolution.
Experimental data on a PCB resolution target is presented, in order to quantify
the resolution enhancement and to compare the performance with established
image enhancement approaches. The presented technique allows exposure of the
interwoven fibre-reinforced embedded structures of the PCB test sample.
|
electrics
|
3,341 |
Algorithmic improvements for the CIECAM02 and CAM16 color appearance models
|
eess.IV
|
This note is concerned with the CIECAM02 color appearance model and its
successor, the CAM16 color appearance model. Several algorithmic flaws are
pointed out and remedies are suggested. The resulting color model is
algebraically equivalent to CIECAM02/CAM16, but shorter, more efficient, and
works correctly for all edge cases.
|
electrics
|
3,342 |
Multispectral Focal Stack Acquisition Using A Chromatic Aberration Enlarged Camera
|
eess.IV
|
Capturing more information, e.g. geometry and material, using optical cameras
can greatly help the perception and understanding of complex scenes. This paper
proposes a novel method to capture the spectral and light field information
simultaneously. By using a delicately designed chromatic aberration enlarged
camera, the spectral-varying slices at different depths of the scene can be
easily captured. Afterwards, the multispectral focal stack, which is composed
of a stack of multispectral slice images focusing on different depths, can be
recovered from the spectral-varying slices by using a Local Linear
Transformation (LLT) based algorithm. The experiments verify the effectiveness
of the proposed method.
|
electrics
|
3,343 |
Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery
|
eess.IV
|
In this paper we discuss the potential and challenges regarding SAR-optical
stereogrammetry for urban areas, using very-high-resolution (VHR) remote
sensing imagery. Since we do this mainly from a geometrical point of view, we
first analyze the height reconstruction accuracy to be expected for different
stereogrammetric configurations. Then, we propose a strategy for simultaneous
tie point matching and 3D reconstruction, which exploits an epipolar-like
search window constraint. To drive the matching and ensure some robustness, we
combine different established handcrafted similarity measures. For the
experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and
MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR
imagery is generally feasible with 3D positioning accuracies in the
meter-domain, although the matching of these strongly heterogeneous
multi-sensor data remains very challenging. Keywords: Synthetic Aperture Radar
(SAR), optical images, remote sensing, data fusion, stereogrammetry
|
electrics
|
3,344 |
Temporo-Spatial Collaborative Filtering for Parameter Estimation in Noisy DCE-MRI Sequences: Application to Breast Cancer Chemotherapy Response
|
eess.IV
|
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a minimally
invasive imaging technique which can be used for characterizing tumor biology
and tumor response to radiotherapy. Pharmacokinetic (PK) estimation is widely
used for DCE-MRI data analysis to extract quantitative parameters relating to
microvasculature characteristics of the cancerous tissues. Unavoidable noise
corruption during DCE-MRI data acquisition has a large effect on the accuracy
of PK estimation. In this paper, we propose a general denoising paradigm called
gather-noise attenuation and reduce (GNR) and a novel temporal-spatial
collaborative filtering (TSCF) denoising technique for DCE-MRI data. TSCF takes
advantage of temporal correlation in DCE-MRI, as well as anatomical spatial
similarity, to collaboratively filter noisy DCE-MRI data. The proposed TSCF
denoising algorithm decreases the PK parameter normalized estimation error by
57% and improves the structural similarity of PK parameter estimation by 86%
compared to a baseline without denoising, while being an order of magnitude
faster than state-of-the-art denoising methods. TSCF improves the univariate
linear regression (ULR) c-statistic value for early prediction of pathologic
response up to 18%, and shows complete separation of pathologic complete
response (pCR) and non-pCR groups on a challenge dataset.
|
electrics
|
3,345 |
Large-Scale Study of Perceptual Video Quality
|
eess.IV
|
The great variation in videographic skills, camera designs, compression and
processing protocols, and displays leads to an enormous variety of video
impairments. Current no-reference (NR) video quality models are unable to
handle this diversity of distortions. This is true in part because available
video quality assessment databases contain very limited content, fixed
resolutions, were captured using a small number of camera devices by a few
videographers and have been subjected to a modest number of distortions. As
such, these databases fail to adequately represent real world videos, which
contain very different kinds of content obtained under highly diverse imaging
conditions and are subject to authentic, often commingled distortions that are
impossible to simulate. As a result, NR video quality predictors tested on
real-world video data often perform poorly. Towards advancing NR video quality
prediction, we constructed a large-scale video quality assessment database
containing 585 videos of unique content, captured by a large number of users,
with wide-ranging levels of complex, authentic distortions. We collected a
large number of subjective video quality scores via crowdsourcing. A total of
4776 unique participants took part in the study, yielding more than 205000
opinion scores, resulting in an average of 240 recorded human opinions per
video. We demonstrate the value of the new resource, which we call the LIVE
Video Quality Challenge Database (LIVE-VQC), by conducting a comparison of
leading NR video quality predictors on it. This study is the largest video
quality assessment study ever conducted along several key dimensions: number of
unique contents, capture devices, distortion types and combinations of
distortions, study participants, and recorded subjective scores. The database
is available for download on this link:
http://live.ece.utexas.edu/research/LIVEVQC/index.html .
|
electrics
|
3,346 |
Tracking of the Internal Jugular Vein in Ultrasound Images Using Optical Flow
|
eess.IV
|
Detection of relative changes in circulating blood volume is important to
guide resuscitation and manage a variety of medical conditions including sepsis,
trauma, dialysis and congestive heart failure. Recent studies have shown that
estimates of circulating blood volume can be obtained from ultrasound imagery
of the internal jugular vein (IJV). However, segmentation and tracking
of the IJV is significantly influenced by speckle noise and shadowing which
introduce uncertainty in the boundaries of the vessel. In this paper, we
investigate the use of optical flow algorithms for segmentation and tracking of
the IJV and show that the classical Lucas-Kanade (LK) algorithm provides the
best performance among well-known flow tracking algorithms.
|
electrics
|
3,347 |
Saliency Inspired Quality Assessment of Stereoscopic 3D Video
|
eess.IV
|
To study the visual attentional behavior of the Human Visual System (HVS) on 3D
content, eye tracking experiments are performed and Visual Attention Models
(VAMs) are designed. One of the main applications of these VAMs is in quality
assessment of 3D video. The usage of 2D VAMs in designing 2D quality metrics is
already well explored. This paper investigates the added value of incorporating
3D VAMs into Full-Reference (FR) and No-Reference (NR) quality assessment
metrics for stereoscopic 3D video. To this end, state-of-the-art 3D VAMs are
integrated into the quality assessment pipelines of various existing FR and NR
stereoscopic video quality metrics. Performance evaluations using a large scale
database of stereoscopic videos with various types of distortions demonstrated
that using saliency maps generally improves the performance of the quality
assessment task for stereoscopic video. However, depending on the type of
distortion, utilized metric, and VAM, the amount of improvement will change.
|
electrics
|
3,348 |
A Perceptual Based Motion Compensation Technique for Video Coding
|
eess.IV
|
Motion estimation is one of the important procedures in all video
encoders. Most of the complexity of a video coder depends on the complexity
of the motion estimation step. The original motion estimation algorithm has
considerable complexity, and therefore many improvements have been proposed to
enhance its crude version. The basic idea of many of these works was to
optimize a distortion function such as mean squared error (MSE) or sum of
absolute differences (SAD) in block matching. However, it has been shown that
these metrics do not reflect perceived quality, as they are not compatible
with the human visual system (HVS). In this paper we explore the use of image
quality metrics in video coding, and more specifically in motion estimation.
We utilize perceptual image quality metrics instead of MSE or SAD in
block-based motion estimation. Three different metrics are used: structural
similarity (SSIM), complex wavelet structural similarity (CW-SSIM), and visual
information fidelity (VIF). Experimental results showed that using these
quality criteria can improve the compression rate while the quality remains
fixed, thus giving better quality in the coded video at the same bit budget.
|
electrics
|
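The block-matching motion estimation discussed in the abstract above can be sketched as follows. This is a hypothetical minimal full-search implementation using SAD as the matching cost; the paper's contribution is precisely to replace such a cost with perceptual metrics like SSIM, CW-SSIM, or VIF.

```python
# Minimal exhaustive block-matching sketch with a SAD cost (illustrative only;
# swapping `sad` for a perceptual metric gives the approach the paper studies).
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def block_match(ref, cur, top, left, size=8, radius=4):
    """Find the motion vector (dy, dx) minimizing the cost for one block."""
    block = cur[top:top + size, left:left + size]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if (0 <= y and 0 <= x
                    and y + size <= ref.shape[0] and x + size <= ref.shape[1]):
                cost = sad(ref[y:y + size, x:x + size], block)
                if best is None or cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv

# A block copied from the reference at offset (+2, +3) is recovered exactly
# (unique values in ref make the zero-cost match unambiguous).
ref = np.arange(1024).reshape(32, 32)
cur = np.zeros_like(ref)
cur[8:16, 8:16] = ref[10:18, 11:19]
assert block_match(ref, cur, 8, 8) == (2, 3)
```

Replacing `sad` with a perceptual cost changes only the inner comparison, which is why the encoder structure stays the same while the selected motion vectors become perceptually better aligned.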
3,349 |
Exploring the Distributed Video Coding in a Quality Assessment Context
|
eess.IV
|
In the popular video coding paradigm, the encoder has the task of exploiting
both spatial and temporal redundancies present in the video sequence, which is
a complex procedure. As a result, almost all video encoders have five to ten
times more complexity than their decoders. In a video compression process, one
of the main tasks at the encoder side is motion estimation, which extracts the
temporal correlation between frames. Distributed video coding (DVC) proposes
an idea that can lead to low-complexity encoders and higher-complexity
decoders. DVC is a new paradigm in video compression based on the
information-theoretic ideas of the Slepian-Wolf and Wyner-Ziv theorems.
Wyner-Ziv coding is naturally robust against transmission errors and can be
used for joint source and channel coding. Side information is one of the key
components of the Wyner-Ziv decoder; better side information generation
results in better functionality of the Wyner-Ziv coder. In this paper we
propose a new method that generates side information of better quality and
thus better compression. We have used human visual system (HVS) based image
quality metrics as our quality criterion. The motion estimation used in the
decoder is modified according to these metrics so that finer side information
is obtained. The motion compensation is optimized for perceptual quality
metrics and leads to better side information generation compared to the
conventional mean squared error (MSE) or sum of absolute differences (SAD)
based motion compensation currently used in the literature. Better motion
compensation means better compression.
|
electrics
|
3,350 |
Cubic Spline Interpolation Segmenting over Conventional Segmentation Procedures: Application and Advantages
|
eess.IV
|
We design a novel method for segmenting images using cubic spline
interpolation and compare it with different techniques to determine which
yields the most efficient segmentation. This paper compares polynomial
least-squares interpolation and conventional Otsu thresholding with the spline
interpolation technique for image segmentation. The threshold value is
determined using the above-mentioned techniques, which are then used to segment
an image into a binary image. The results of the proposed technique are also
compared with the conventional algorithms after applying image equalization.
The better technique is determined based on the deviation and mean squared
error relative to an accurately segmented reference image; the technique with
the least deviation and mean squared error is declared the better one.
|
electrics
|
3,351 |
A Human Visual System-Based 3D Video Quality Metric
|
eess.IV
|
Although several 2D quality metrics have been proposed for images and videos,
in the case of 3D, efforts are only at the initial stages. In this paper, we
propose a new full-reference quality metric for 3D content. Our method is
modeled around the HVS, fusing the information of both left and right channels,
considering color components, the cyclopean views of the two videos and
disparity. Performance evaluations showed that our 3D quality metric
successfully monitors the degradation of quality caused by several
representative types of distortion and it has 86% correlation with the results
of subjective evaluations.
|
electrics
|
3,352 |
3D Video Quality Metric for 3D Video Compression
|
eess.IV
|
As the evolution of multiview display technology is bringing glasses-free
3DTV closer to reality, MPEG and VCEG are preparing an extension to HEVC to
encode multiview video content. View synthesis in the current version of the 3D
video codec is performed using PSNR as the quality measure. In this paper,
we propose a full-reference, human-visual-system-based 3D video quality metric
to be used in multiview encoding as an alternative to PSNR. Performance of our
metric is tested in a 2-view case scenario. The quality of the compressed
stereo pair, formed from a decoded view and a synthesized view, is evaluated at
the encoder side. The performance is verified through a series of subjective
tests and compared with that of PSNR, SSIM, MS-SSIM, VIFp, and VQM metrics.
Experimental results showed that our 3D quality metric has the highest
correlation with Mean Opinion Scores (MOS) compared to the other tested
metrics.
|
electrics
|
3,353 |
Effect of High Frame Rates on 3D Video Quality of Experience
|
eess.IV
|
In this paper, we study the effect of 3D videos with increased frame rates on
the viewers' quality of experience. We performed a series of subjective tests to
seek the subjects' preferences among videos of the same scene at four different
frame rates: 24, 30, 48, and 60 frames per second (fps). Results revealed that
subjects clearly prefer higher frame rates. In particular, Mean Opinion Score
(MOS) values associated with the 60 fps 3D videos were 55% greater than MOS
values of the 24 fps 3D videos.
|
electrics
|
3,354 |
Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content
|
eess.IV
|
While there exists a wide variety of Low Dynamic Range (LDR) quality metrics,
only a limited number of metrics are designed specifically for the High Dynamic
Range (HDR) content. With the introduction of HDR video compression
standardization effort by international standardization bodies, the need for an
efficient video quality metric for HDR applications has become more pronounced.
The objective of this study is to compare the performance of the existing
full-reference LDR and HDR video quality metrics on HDR content and identify
the most effective one for HDR applications. To this end, a new HDR video
dataset is created, which consists of representative indoor and outdoor video
sequences with different brightness and motion levels and different
representative types of distortions. The quality of each distorted video in
this dataset is evaluated both subjectively and objectively. The correlation
between the subjective and objective results confirms that the VIF quality
metric outperforms all the other tested metrics in the presence of the tested
types of distortions.
|
electrics
|
3,355 |
Compression of High Dynamic Range Video Using the HEVC and H.264/AVC Standards
|
eess.IV
|
The existing video coding standards such as H.264/AVC and High Efficiency
Video Coding (HEVC) have been designed based on the statistical properties of
Low Dynamic Range (LDR) videos and are not accustomed to the characteristics of
High Dynamic Range (HDR) content. In this study, we investigate the performance
of the latest LDR video compression standard, HEVC, as well as the recent
widely commercially used video compression standard, H.264/AVC, on HDR content.
Subjective evaluations of results on an HDR display show that viewers clearly
prefer the videos coded via an HEVC-based encoder to the ones encoded using an
H.264/AVC encoder. In particular, HEVC outperforms H.264/AVC by an average of
10.18% in terms of mean opinion score and 25.08% in terms of bit rate savings.
|
electrics
|
3,356 |
The Effect of Frame Rate on 3D Video Quality and Bitrate
|
eess.IV
|
Increasing the frame rate of a 3D video generally results in improved Quality
of Experience (QoE). However, higher frame rates involve a higher degree of
complexity in capturing, transmission, storage, and display. The question that
arises here is what frame rate guarantees high viewing quality of experience
given the existing/required 3D devices and technologies (3D cameras, 3D TVs,
compression, transmission bandwidth, and storage capacity). This question has
already been addressed for the case of 2D video, but not for 3D. The objective
of this paper is to study the relationship between 3D quality and bitrate at
different frame rates. Our performance evaluations show that increasing the
frame rate of 3D videos beyond 60 fps may not be visually distinguishable. In
addition, our experiments show that when the available bandwidth is reduced,
the highest possible 3D quality of experience can be achieved by adjusting
(decreasing) the frame rate instead of increasing the compression ratio. The
results of our study are of particular interest to network providers for rate
adaptation in variable bitrate channels.
|
electrics
|
3,357 |
An Efficient Human Visual System Based Quality Metric for 3D Video
|
eess.IV
|
Stereoscopic video technologies have been introduced to the consumer market
in the past few years. A key factor in designing a 3D system is to understand
how different visual cues and distortions affect the perceptual quality of
stereoscopic video. The ultimate way to assess 3D video quality is through
subjective tests. However, subjective evaluation is time consuming, expensive,
and in some cases not possible. The other solution is developing objective
quality metrics, which attempt to model the Human Visual System (HVS) in order
to assess perceptual quality. Although several 2D quality metrics have been
proposed for still images and videos, in the case of 3D efforts are only at the
initial stages. In this paper, we propose a new full-reference quality metric
for 3D content. Our method mimics the HVS by fusing information of both the left
and right views to construct the cyclopean view, as well as by taking into account
the sensitivity of the HVS to contrast and the disparity of the views. In addition,
a temporal pooling strategy is utilized to address the effect of temporal
variations of the quality in the video. Performance evaluations showed that our
3D quality metric quantifies quality degradation caused by several
representative types of distortions very accurately, with a Pearson correlation
coefficient of 90.8%, a competitive performance compared to the
state-of-the-art 3D quality metrics.
|
electrics
|
3,358 |
Benchmark 3D eye-tracking dataset for visual saliency prediction on stereoscopic 3D video
|
eess.IV
|
Visual Attention Models (VAMs) predict the location of an image or video
regions that are most likely to attract human attention. Although saliency
detection is well explored for 2D image and video content, there are only few
attempts made to design 3D saliency prediction models. Newly proposed 3D visual
attention models have to be validated over large-scale video saliency
prediction datasets, which also contain results of eye-tracking information.
There are several publicly available eye-tracking datasets for 2D image and
video content. In the case of 3D, however, there is still a need for
large-scale video saliency datasets for the research community for validating
different 3D-VAMs. In this paper, we introduce a large-scale dataset containing
eye-tracking data collected from 24 subjects who watched 61 stereoscopic 3D
videos (and their 2D versions) in a free-viewing test. We
evaluate the performance of the existing saliency detection methods over the
proposed dataset. In addition, we created an online benchmark for validating
the performance of the existing 2D and 3D visual attention models and for
facilitating the addition of new VAMs to the benchmark. Our benchmark currently
contains 50 different VAMs.
|
electrics
|
3,359 |
Introducing A Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
|
eess.IV
|
High Dynamic Range (HDR) displays and cameras are paving their way into
the consumer market at a rapid growth rate. Thanks to TV and camera
manufacturers, HDR systems are now becoming available commercially to end
users. This is taking place only a few years after the blooming of 3D video
technologies. MPEG/ITU are also actively working towards the standardization of
these technologies. However, preliminary research efforts in these video
technologies are hampered by the lack of sufficient experimental data. In this
paper, we introduce a Stereoscopic 3D HDR (SHDR) database of videos that is
made publicly available to the research community. We explain the procedure
taken to capture, calibrate, and post-process the videos. In addition, we
provide insights on potential use-cases, challenges, and research
opportunities, implied by the combination of higher dynamic range of the HDR
aspect, and depth impression of the 3D aspect.
|
electrics
|
3,360 |
A Study on the Relationship Between Depth Map Quality and the Overall 3D Video Quality of Experience
|
eess.IV
|
The emergence of multiview displays has made the need for synthesizing
virtual views more pronounced, since it is not practical to capture all of the
possible views when filming multiview content. View synthesis is performed
using the available views and depth maps. There is a correlation between the
quality of the synthesized views and the quality of depth maps. In this paper
we study the effect of depth map quality on perceptual quality of synthesized
view through subjective and objective analysis. Our evaluation results show
that: 1) 3D video quality depends highly on the depth map quality and 2) the
Visual Information Fidelity index computed between the reference and distorted
depth maps has Pearson correlation ratio of 0.75 and Spearman rank order
correlation coefficient of 0.67 with the subjective 3D video quality.
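As an illustrative aside (not code from the paper), the Pearson and Spearman figures quoted above can be computed with a few lines of NumPy; the score vectors below are hypothetical placeholders, not the study's data.

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman rank-order correlation: Pearson computed on the ranks
    (this simple ranking assumes no tied scores)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# hypothetical objective scores on distorted depth maps vs. subjective 3D MOS
vif = np.array([0.91, 0.62, 0.78, 0.40, 0.55])
mos = np.array([4.5, 3.1, 3.9, 2.0, 2.9])
print(pearson(vif, mos), spearman(vif, mos))
```

Spearman is simply Pearson applied to ranks, which is why a metric can score differently on the two measures when its relationship to MOS is monotonic but nonlinear.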
|
electrics
|
3,361 |
Potential quality improvement of stochastic optical localization nanoscopy images obtained by frame by frame localization algorithms
|
eess.IV
|
A data movie of stochastic optical localization nanoscopy contains spatial
and temporal correlations, both providing information of emitter locations. The
majority of localization algorithms in the literature estimate emitter
locations by frame-by-frame localization (FFL), which exploits only the spatial
correlation and leaves the temporal correlation in the FFL nanoscopy images.
The temporal correlation contained in the FFL images, if exploited, can improve
the localization accuracy and the image quality. In this paper, we analyze the
properties of the FFL images in terms of root mean square minimum distance
(RMSMD) and root mean square error (RMSE). It is shown that RMSMD and RMSE can
be potentially reduced by a maximum fold equal to the square root of the
average number of activations per emitter. Analyzed and revealed are also
several statistical properties of RMSMD and RMSE and their relationship with
respect to a large number of data frames, bias and variance of localization
errors, small localization errors, sample drift, and the worst FFL image.
Numerical examples are presented and the results confirm the predictions of the
analysis. Ideas about how to develop an algorithm to exploit the temporal
correlation of FFL images are also briefly discussed. The results suggest the
development of two kinds of localization algorithms: the algorithms that can
exploit the temporal correlation of FFL images and the unbiased localization
algorithms.
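The square-root-of-activations bound stated above can be illustrated with a toy Monte Carlo simulation (not from the paper): averaging K independent localizations of the same emitter shrinks the RMSE by roughly sqrt(K). The noise level and trial count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 16                       # average number of activations per emitter
trials = 20000
true_pos = np.zeros(2)
# per-activation position estimates with unit localization noise per axis
est = true_pos + rng.normal(0.0, 1.0, size=(trials, K, 2))

# RMSE of single-activation (frame-by-frame) estimates
rmse_ffl = np.sqrt(np.mean(np.sum((est - true_pos) ** 2, axis=-1)))
# RMSE after pooling all K activations of the same emitter
rmse_pooled = np.sqrt(np.mean(np.sum((est.mean(axis=1) - true_pos) ** 2,
                                     axis=-1)))
print(rmse_ffl / rmse_pooled)   # ≈ sqrt(K) = 4
```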
|
electrics
|
3,362 |
ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 - JCT3V-C0032: A human visual system based 3D video quality metric
|
eess.IV
|
This contribution proposes a full-reference Human-Visual-System based 3D
video quality metric. In this report, the presented metric is used to evaluate
the quality of compressed stereo pair formed from a decoded view and a
synthesized view. The performance of the proposed metric is verified through a
series of subjective tests and compared with that of PSNR, SSIM, MS-SSIM, VIFp,
and VQM metrics. The experimental results show that HV3D has the highest
correlation with Mean Opinion Scores (MOS) compared to other tested metrics.
|
electrics
|
3,363 |
ISO/IEC JTC1/SC29/WG11 MPEG2014/ m34661: Quality Assessment of High Dynamic Range (HDR) Video Content Using Existing Full-Reference Metrics
|
eess.IV
|
The main focus of this document is to evaluate the performance of the
existing LDR and HDR metrics on HDR video content, which in turn will allow for
a better understanding of how well each of these metrics works and whether they
can be applied in the capture, compression, and transmission of HDR data. To
this end, a series of subjective tests is performed to evaluate the quality of
the DML-HDR video database [1] when several representative types of
artifacts are present, using an HDR display. Then, the correlation between the
results from the existing LDR and HDR quality metrics and those from subjective
tests is measured to determine the most effective existing quality metric for
HDR content.
|
electrics
|
3,364 |
Quantitative Susceptibility Mapping using Deep Neural Network: QSMnet
|
eess.IV
|
Deep neural networks have demonstrated promising potential for the field of
medical image reconstruction. In this work, an MRI reconstruction algorithm,
which is referred to as quantitative susceptibility mapping (QSM), has been
developed using a deep neural network in order to perform dipole deconvolution,
which restores magnetic susceptibility source from an MRI field map. Previous
approaches of QSM require multiple orientation data (e.g. Calculation of
Susceptibility through Multiple Orientation Sampling or COSMOS) or
regularization terms (e.g. Truncated K-space Division or TKD; Morphology
Enabled Dipole Inversion or MEDI) to solve the ill-conditioned deconvolution
problem. Unfortunately, they either require long multiple orientation scans or
suffer from artifacts. To overcome these shortcomings, a deep neural network,
QSMnet, is constructed to generate a high quality susceptibility map from
single orientation data. The network has a modified U-net structure and is
trained using gold-standard COSMOS QSM maps. 25 datasets from 5 subjects (5
orientations each) were used for patch-wise training after doubling the data
via augmentation. Two additional 5-orientation datasets were used for
validation and test (one dataset each). The QSMnet maps of the test dataset
were compared with those from TKD and MEDI for image quality and consistency in
multiple head orientations. Quantitative and qualitative image quality
comparisons demonstrate that the QSMnet results have superior image quality to
those of TKD or MEDI and have comparable image quality to those of COSMOS.
Additionally, QSMnet maps reveal substantially better consistency across the
multiple orientations than those from TKD or MEDI. As a preliminary
application, the network was tested on two patients. The QSMnet maps showed
lesion contrasts similar to those from MEDI, demonstrating potential for
future applications.
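As a reference point for the TKD baseline mentioned above, a minimal truncated k-space division can be sketched as follows; this is an illustrative implementation, with the kernel convention (B0 along z) and the threshold value chosen as assumptions, not taken from the paper.

```python
import numpy as np

def tkd_qsm(field, t=0.2):
    """Truncated K-space Division (TKD) sketch: invert the unit dipole kernel
    in k-space, clamping the near-zero kernel region (the ill-conditioned
    magic-angle cone) to avoid noise amplification."""
    nx, ny, nz = field.shape
    kx, ky, kz = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny),
                             np.fft.fftfreq(nz), indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                        # avoid division by zero at DC
    D = 1.0 / 3.0 - kz ** 2 / k2             # unit dipole kernel, B0 along z
    # invert D, truncating wherever |D| falls below the threshold t
    D_inv = np.where(np.abs(D) >= t,
                     1.0 / np.where(D == 0, 1.0, D),
                     np.sign(D) / t)
    D_inv[0, 0, 0] = 0.0                     # susceptibility DC is unrecoverable
    return np.real(np.fft.ifftn(np.fft.fftn(field) * D_inv))
```

Outside the truncated cone the inversion is exact, which is why TKD is fast but leaves streaking artifacts that regularized or learned methods such as MEDI and QSMnet aim to suppress.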
|
electrics
|
3,365 |
3D Video Quality Metric for Mobile Applications
|
eess.IV
|
In this paper, we propose a new full-reference quality metric for mobile 3D
content. Our method is modeled around the Human Visual System, fusing the
information of both left and right channels, considering color components, the
cyclopean views of the two videos, and disparity. Our method assesses the
quality of 3D videos displayed on a mobile 3DTV, taking into account the effect
of resolution, distance from the viewer's eyes, and dimensions of the mobile
display. Performance evaluations showed that our mobile 3D quality metric
monitors the degradation of quality caused by several representative types of
distortion with 82 percent correlation with results of subjective tests, an
accuracy much better than that of the state of the art mobile 3D quality
metric.
|
electrics
|
3,366 |
Collaborative Sparse Priors for Infrared Image Multi-view ATR
|
eess.IV
|
Feature extraction from infrared (IR) images remains a challenging task.
Learning based methods that can work on raw imagery/patches have therefore
assumed significance. We propose a novel multi-task extension of the widely
used sparse-representation-classification (SRC) method in both single and
multi-view set-ups. That is, the test sample could be a single IR image or
images from different views. When expanded in terms of a training dictionary,
the coefficient matrix in a multi-view scenario admits a sparse structure that
is not easily captured by traditional sparsity-inducing measures such as the
$l_0$-row pseudo norm. To that end, we employ collaborative spike and slab
priors on the coefficient matrix, which can capture fairly general sparse
structures. Our work involves joint parameter and sparse coefficient estimation
(JPCEM) which alleviates the need to handpick prior parameters before
classification. The experimental merits of JPCEM are substantiated through
comparisons with other state-of-art methods on a challenging mid-wave IR image
(MWIR) ATR database made available by the US Army Night Vision and Electronic
Sensors Directorate.
|
electrics
|
3,367 |
A Unifying Decomposition and Reconstruction Model for Discrete Signals
|
eess.IV
|
Decomposing discrete signals such as images into components is vital in many
applications, and this paper proposes a framework for producing filtering banks
to accomplish this task. The framework is an equation set that is ill-posed and
thus has many solutions. Each solution can form a filtering bank consisting of
two decomposition filters and two reconstruction filters. In particular, many
existing discrete wavelet filtering banks are special cases of the framework,
so the framework presents the different wavelet filtering banks in a unified
way. Moreover, additional constraints can be imposed on the framework to make
it well-posed, meaning that decomposition and reconstruction (D&R) can take
practical requirements into account, unlike existing discrete wavelet filtering
banks whose coefficients are fixed. All the filtering banks produced by the
framework behave excellently, offering strong decomposition effects and precise
reconstruction accuracy; this has been proved theoretically and confirmed by a
large number of experimental results.
|
electrics
|
3,368 |
Target detection in synthetic aperture radar imagery: a state-of-the-art survey
|
eess.IV
|
Target detection is the front-end stage in any automatic target recognition
system for synthetic aperture radar (SAR) imagery (SAR-ATR). The efficacy of
the detector directly impacts the succeeding stages in the SAR-ATR processing
chain. There are numerous methods reported in the literature for implementing
the detector. We offer an umbrella under which the various research activities
in the field are broadly probed and taxonomized. First, a taxonomy for the
various detection methods is proposed. Second, the underlying assumptions for
different implementation strategies are overviewed. Third, a tabular comparison
between careful selections of representative examples is introduced. Finally, a
novel discussion is presented, wherein the issues covered include suitability
of SAR data models, understanding the multiplicative SAR data models, and two
unique perspectives on constant false alarm rate (CFAR) detection: signal
processing and pattern recognition. From a signal processing perspective, CFAR
is shown to be a finite impulse response band-pass filter. From a statistical
pattern recognition perspective, CFAR is shown to be a suboptimal one-class
classifier: a Euclidean distance classifier and a quadratic discriminant with a
missing term for one-parameter and two-parameter CFAR, respectively. We make a
contribution toward enabling an objective design and implementation for target
detection in SAR imagery.
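The signal-processing view of CFAR above can be made concrete with a toy one-dimensional cell-averaging CFAR (a standard textbook variant, not code from the survey); the guard/training sizes and threshold scale below are arbitrary.

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=3.0):
    """1D cell-averaging CFAR sketch: a cell is declared a detection when it
    exceeds `scale` times the mean of the surrounding training cells
    (guard cells excluded). The local mean is an FIR moving average,
    consistent with the band-pass filter interpretation."""
    n = len(x)
    det = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        cells = np.r_[x[lo:max(0, i - guard)], x[i + guard + 1:hi]]
        if cells.size and x[i] > scale * cells.mean():
            det[i] = True
    return det

noise = np.ones(64)
noise[32] = 10.0                        # injected target
print(np.flatnonzero(ca_cfar(noise)))   # → [32]
```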
|
electrics
|
3,369 |
Joint Bilateral Filter for Signal Recovery from Phase Preserved Curvelet Coefficients for Image Denoising
|
eess.IV
|
Thresholding of Curvelet coefficients for image denoising drains out subtle
signal components lying in the noise subspace. This produces ringing artifacts
near edges and a granular effect in the denoised image. We found that the noise
sensitivity of Curvelet phases (in contrast to their magnitudes) decreases at
higher noise levels. Thus, we preserved the phases of the coefficients below the
threshold at the coarser scales and estimated their magnitudes from the
thresholded and noisy coefficients using the Joint Bilateral Filtering (JBF)
technique. At the finest scale, we apply the Bilateral Filter (BF) to keep edge
information. Further, the Guided Image
Filter (GIF) is applied on the reconstructed image to localize the edges and to
preserve small image details and textures. The lower noise sensitivity of
Curvelet phases at higher noise strengths elevates the performance of the
proposed method above several state-of-the-art techniques, while providing
comparable outcomes at lower noise levels.
|
electrics
|
3,370 |
Learning an Inverse Tone Mapping Network with a Generative Adversarial Regularizer
|
eess.IV
|
Transferring a low-dynamic-range (LDR) image to a high-dynamic-range (HDR)
image, which is the so-called inverse tone mapping (iTM), is an important
imaging technique to improve visual effects of imaging devices. In this paper,
we propose a novel deep learning-based iTM method, which learns an inverse tone
mapping network with a generative adversarial regularizer. In the framework of
alternating optimization, we learn a U-Net-based HDR image generator to
transfer input LDR images to HDR ones, and a simple CNN-based discriminator to
classify the real HDR images and the generated ones. Specifically, when
learning the generator we consider the content-related loss and the generative
adversarial regularizer jointly to improve the stability and the robustness of
the generated HDR images. Using the learned generator as the proposed inverse
tone mapping network, we achieve superior iTM results to the state-of-the-art
methods consistently.
|
electrics
|
3,371 |
AV1 Video Coding Using Texture Analysis With Convolutional Neural Networks
|
eess.IV
|
Modern video codecs including the newly developed AOM/AV1 utilize hybrid
coding techniques to remove spatial and temporal redundancy. However, efficient
exploitation of statistical dependencies measured by a mean squared error (MSE)
does not always produce the best psychovisual result. One interesting approach
is to only encode visually relevant information and use a different coding
method for "perceptually insignificant" regions in the frame, which can lead to
substantial data rate reductions while maintaining visual quality. In this
paper, we introduce a texture analyzer before encoding the input sequences to
identify detail irrelevant texture regions in the frame using convolutional
neural networks. We designed and developed a new coding tool referred to as
texture mode for AV1, where if texture mode is selected at the encoder, no
inter-frame prediction is performed for the identified texture regions.
Instead, displacement of the entire region is modeled by just one set of motion
parameters. Therefore, only the model parameters are transmitted to the decoder
for reconstructing the texture regions. Non-texture regions in the frame are
coded conventionally. We show that for many standard test sets, the proposed
method achieved significant data rate reductions.
|
electrics
|
3,372 |
An initial exploration of vicarious and in-scene calibration techniques for small unmanned aircraft systems
|
eess.IV
|
The use of small unmanned aircraft systems (sUAS) for applications in the
field of precision agriculture has demonstrated the need to produce temporally
consistent imagery to allow for quantitative comparisons. In order for these
aerial images to be used to identify actual changes on the ground, conversion
of raw digital count to reflectance, or to an atmospherically normalized space,
needs to be carried out. This paper will describe an experiment that compares
the use of reflectance calibration panels, for use with the empirical line
method (ELM), against a newly proposed ratio of the target radiance and the
downwelling radiance, to predict the reflectance of known targets in the scene.
We propose that the use of an on-board downwelling light sensor (DLS) may
provide the sUAS remote sensing practitioner with an approach that does not
require the expensive and time consuming task of placing known reflectance
standards in the scene. Three calibration methods were tested in this study:
2-Point ELM, 1-Point ELM, and At-altitude Radiance Ratio (AARR). Our study
indicates that the traditional 2-Point ELM produces the lowest mean error in
band effective reflectance factor, 0.0165. The 1-Point ELM and AARR produce
mean errors of 0.0343 and 0.0287 respectively. A modeling of the proposed AARR
approach indicates that the technique has the potential to perform better than
the 2-Point ELM method, with a 0.0026 mean error in band effective reflectance
factor, indicating that this newly proposed technique may prove to be a viable
alternative with suitable on-board sensors.
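The 2-Point ELM referenced above is a per-band linear fit through two panels of known reflectance; the following sketch uses hypothetical panel values, not measurements from the experiment.

```python
import numpy as np

def elm_2point(L_dark, L_bright, R_dark, R_bright):
    """Two-point empirical line method: solve reflectance = gain * L + offset
    from a dark and a bright calibration panel (per spectral band)."""
    gain = (R_bright - R_dark) / (L_bright - L_dark)
    offset = R_dark - gain * L_dark
    return gain, offset

# hypothetical panel measurements: raw radiance vs. known reflectance
gain, offset = elm_2point(L_dark=120.0, L_bright=980.0,
                          R_dark=0.04, R_bright=0.85)
scene_radiance = np.array([120.0, 550.0, 980.0])
refl = gain * scene_radiance + offset
print(refl)   # panel pixels map back to 0.04 and 0.85
```

The 1-Point variant fixes the offset (e.g. at zero) and fits only the gain, which is why it trades accuracy for needing a single panel.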
|
electrics
|
3,373 |
Real-time single-pixel video imaging with Fourier domain regularization
|
eess.IV
|
We present a closed-form image reconstruction method for single pixel imaging
based on the generalized inverse of the measurement matrix. Its numerical cost
scales linearly with the number of measured samples. Regularization is obtained
by minimizing the norms of the convolution between the reconstructed image and
a set of spatial filters, and the final reconstruction formula can be expressed
in terms of matrix pseudoinverse. At high compression this approach is an
interesting alternative to the methods of compressive sensing based on l1-norm
optimization, which are too slow for real-time applications. For instance, we
demonstrate experimental single-pixel detection with real-time reconstruction
obtained in parallel with the measurement at the frame rate of $11$ Hz for
highly compressive measurements with the resolution of $256\times 256$. For
this purpose, we preselect the sampling functions to match the average spectrum
obtained with an image database. The sampling functions are selected from the
Walsh-Hadamard basis, from the discrete cosine basis, or from a subset of
Morlet wavelets convolved with white noise. We show that by incorporating the
quadratic criterion into the closed-form reconstruction formula, we are able to
use binary rather than continuous sampling reaching similar reconstruction
quality as is obtained by minimizing the total variation. This makes it
possible to use cosine or Morlet-based sampling with digital micromirror
devices without advanced binarization methods.
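The closed-form backbone of such a reconstruction is the generalized (Moore-Penrose) inverse of the measurement matrix. The sketch below omits the paper's spatial-filter regularization and uses random patterns instead of Walsh-Hadamard or Morlet sampling functions, purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16 * 16, 96                # 256-pixel scene, 96 compressive samples
A = rng.standard_normal((m, n))   # sampling patterns, one per measurement
x = np.zeros(n)
x[40:60] = 1.0                    # toy scene
y = A @ x                         # single-pixel (bucket) measurements

A_pinv = np.linalg.pinv(A)        # precomputed once for a fixed pattern set
x_hat = A_pinv @ y                # closed-form minimum-norm reconstruction
print(x_hat.shape)
```

Because `A_pinv` is precomputed, each frame costs only a matrix-vector product, which is the linear per-sample scaling that makes real-time reconstruction possible where iterative l1-norm solvers are too slow.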
|
electrics
|
3,374 |
A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR
|
eess.IV
|
$L_1$ regularization is used for finding sparse solutions to an
underdetermined linear system. As sparse signals are widely expected in remote
sensing, this type of regularization scheme and its extensions have been widely
employed in many remote sensing problems, such as image fusion, target
detection, and image super-resolution, and have led to promising
results. However, solving such sparse reconstruction problems is
computationally expensive, which limits their practical use. In this
paper, we propose a novel efficient algorithm for solving the complex-valued
$L_1$ regularized least squares problem. Taking the high-dimensional
tomographic synthetic aperture radar (TomoSAR) as a practical example, we
carried out extensive experiments, both with simulation data and real data, to
demonstrate that the proposed approach can retain the accuracy of second-order
methods while dramatically speeding up the processing by one to two orders of magnitude.
Although we have chosen TomoSAR as the example, the proposed method can be
generally applied to any spectral estimation problems.
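For reference, the problem being solved is the complex-valued lasso; the sketch below is a plain first-order ISTA solver for it, shown only to illustrate the objective. It is not the paper's (faster) algorithm, and the toy dimensions are illustrative.

```python
import numpy as np

def ista_complex(A, y, lam, n_iter=300):
    """Plain ISTA for  min_x 0.5*||Ax - y||^2 + lam*||x||_1  with complex
    A, x, y. Complex soft-thresholding shrinks magnitudes, keeps phases."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - A.conj().T @ (A @ x - y) / L # gradient step
        mag = np.abs(z)
        x = z * np.maximum(0.0, 1.0 - (lam / L) / np.maximum(mag, 1e-30))
    return x

# toy sparse spectral-estimation problem
rng = np.random.default_rng(0)
A = (rng.standard_normal((40, 12))
     + 1j * rng.standard_normal((40, 12))) / np.sqrt(2)
x_true = np.zeros(12, dtype=complex)
x_true[3], x_true[7] = 2.0 + 1.0j, -1.5
y = A @ x_true
x_hat = ista_complex(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # the recovered support
```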
|
electrics
|
3,375 |
Conditional Entropy as a Supervised Primitive Segmentation Loss Function
|
eess.IV
|
Supervised image segmentation assigns image voxels to a set of labels, as
defined by a specific labeling protocol. In this paper, we decompose
segmentation into two steps. The first step is what we call "primitive
segmentation", where voxels that form sub-parts (primitives) of the various
segmentation labels available in the training data, are grouped together. The
second step involves computing a protocol-specific label map based on the
primitive segmentation. Our core contribution is a novel loss function for the
first step, where a primitive segmentation model is trained. The proposed loss
function is the entropy of the (protocol-specific) "ground truth" label map
conditioned on the primitive segmentation. The conditional entropy loss enables
combining training datasets that have been manually labeled with different
protocols. Furthermore, as we show empirically, it facilitates an efficient
strategy for transfer learning via a lightweight protocol adaptation model that
can be trained with little manually labeled data. We apply the proposed
approach to the volumetric segmentation of brain MRI scans, where we achieve
promising results.
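The proposed loss, the entropy of the ground-truth label map conditioned on the primitive segmentation, can be estimated from co-occurrence counts; the following is a minimal sketch of that estimator (on toy integer maps, not the paper's training pipeline).

```python
import numpy as np

def conditional_entropy(labels, primitives):
    """H(label | primitive), estimated from voxel-wise co-occurrence counts
    of a protocol-specific label map and a primitive segmentation."""
    labels = np.asarray(labels)
    primitives = np.asarray(primitives)
    joint = np.zeros((primitives.max() + 1, labels.max() + 1))
    np.add.at(joint, (primitives, labels), 1.0)   # co-occurrence counts
    p_xy = joint / joint.sum()                    # joint distribution p(x, y)
    p_x = np.broadcast_to(p_xy.sum(axis=1, keepdims=True), p_xy.shape)
    nz = p_xy > 0
    # H(Y|X) = sum_{x,y} p(x,y) * log( p(x) / p(x,y) )
    return float(np.sum(p_xy[nz] * np.log(p_x[nz] / p_xy[nz])))

# perfectly informative primitives -> zero conditional entropy
print(conditional_entropy([0, 0, 1, 1], [0, 0, 1, 1]))
# one primitive covering two equiprobable labels -> log(2)
print(conditional_entropy([0, 1, 0, 1], [0, 0, 0, 0]))
```

The loss is zero exactly when each primitive maps deterministically to one protocol label, which is what makes primitives reusable across labeling protocols.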
|
electrics
|
3,376 |
Adaptive structured low rank algorithm for MR image recovery
|
eess.IV
|
We introduce an adaptive structured low rank algorithm to recover MR images
from their undersampled Fourier coefficients. The image is modeled as a
combination of a piecewise constant component and a piecewise linear component.
The Fourier coefficients of each component satisfy an annihilation relation,
which results in a structured Toeplitz matrix. We exploit the low rank property
of the matrices to formulate a combined regularized optimization problem, which
can be solved efficiently. Numerical experiments indicate that the proposed
algorithm provides improved recovery performance over the previously proposed
algorithms.
|
electrics
|
3,377 |
Comparing LBP, HOG and Deep Features for Classification of Histopathology Images
|
eess.IV
|
Medical image analysis has become a topic under the spotlight in recent
years. There has been significant progress in medical image research concerning
the use of machine learning. However, numerous questions and problems still
await answers and solutions. In the present study, a comparison of three
classification models is conducted using features extracted
using local binary patterns, the histogram of gradients, and a pre-trained deep
network. Three common image classification methods, including support vector
machines, decision trees, and artificial neural networks are used to classify
feature vectors obtained by different feature extractors. We use KIMIA Path960,
a publicly available dataset of $960$ histopathology images extracted from $20$
different tissue scans to test the accuracy of classification and feature
extractions models used in the study, specifically for the histopathology
images. SVM achieves the highest accuracy of $90.52\%$ using local binary
patterns as features which surpasses the accuracy obtained by deep features,
namely $81.14\%$.
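A minimal 8-neighbour LBP feature extractor (without the rotation-invariant or uniform-pattern refinements used in practice) can be sketched as follows; feeding such histograms to an SVM mirrors the best-performing pipeline reported above, though this code is an illustration, not the study's implementation.

```python
import numpy as np

def lbp_histogram(img):
    """Normalized histogram of basic 8-neighbour local binary patterns.
    Each interior pixel gets an 8-bit code from thresholding its eight
    neighbours against the centre value."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.int64) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

flat = np.full((8, 8), 7.0)          # textureless patch
print(lbp_histogram(flat)[255])      # all neighbours >= centre -> code 255, prints 1.0
```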
|
electrics
|
3,378 |
A new focal-plane 3D imaging method based on temporal ghost imaging
|
eess.IV
|
A new focal-plane three-dimensional (3D) imaging method based on temporal
ghost imaging is proposed and demonstrated. By exploiting the advantages of
temporal ghost imaging, this method enables slow-integrating cameras to perform
3D surface imaging in the framework of sequential flood illumination
and focal-plane detection. The depth information of 3D objects is easily lost
when imaging with traditional cameras, but it can be reconstructed with
high resolution via the temporal correlation between received signals and reference
signals. Combining with a two-dimensional (2D) projection image obtained by one
single shot, a 3D image of the object can be achieved. The feasibility and
performance of this focal-plane 3D imaging method have been verified through
theoretical analysis and numerical experiments in this paper.
|
electrics
|
3,379 |
Global Ultrasound Elastography Using Convolutional Neural Network
|
eess.IV
|
Displacement estimation is very important in ultrasound elastography and
failing to estimate displacement correctly results in failure in generating
strain images. As conventional ultrasound elastography techniques suffer from
decorrelation noise, they are prone to fail in estimating displacement between
echo signals obtained during tissue distortions. This study proposes a novel
elastography technique which addresses the decorrelation in estimating
displacement field. We call our method GLUENet (GLobal Ultrasound Elastography
Network) which uses deep Convolutional Neural Network (CNN) to get a coarse
time-delay estimation between two ultrasound images. This displacement is later
used for formulating a nonlinear cost function which incorporates similarity of
RF data intensity and prior information of estimated displacement. By
optimizing this cost function, we calculate the finer displacement by
exploiting all the information of all the samples of RF data simultaneously.
The Contrast-to-Noise Ratio (CNR) and Signal-to-Noise Ratio (SNR) of the strain
images from our technique are very close to those of the strain images from
GLUE. While most elastography algorithms are sensitive to parameter tuning, our
robust algorithm is substantially less sensitive to parameter tuning.
|
electrics
|
3,380 |
Comments on "Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform"
|
eess.IV
|
The recently introduced coder based on the region-adaptive hierarchical
transform (RAHT) for the compression of point cloud attributes was shown to have
a performance competitive with the state-of-the-art, while being much less
complex. In the paper "Compression of 3D Point Clouds Using a Region-Adaptive
Hierarchical Transform", top performance was achieved using arithmetic coding
(AC), while adaptive run-length Golomb-Rice (RLGR) coding was presented as a
lower-performance lower-complexity alternative. However, we have found that by
reordering the RAHT coefficients we can largely increase the runs of zeros and
significantly increase the performance of the RLGR-based RAHT coder. As a
result, the new coder, using ordered coefficients, was shown to outperform all
other coders, including AC-based RAHT, at an even lower computational cost. We
present new results and plots that should enhance those in the work of Queiroz
and Chou to include the new results for RLGR-RAHT. We venture to say, based on the
results herein, that RLGR-RAHT with sorted coefficients is the new
state-of-the-art in point cloud compression.
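Why reordering helps run-length coding can be seen with a toy run counter (an illustration, not the paper's coefficient ordering); grouping nonzero coefficients together turns many short zero-runs into one long run, which RLGR encodes cheaply.

```python
import numpy as np

def zero_runs(v):
    """Return (number_of_zero_runs, run_lengths) for a 1D coefficient
    vector. Run-length coders benefit from fewer, longer zero runs."""
    z = np.concatenate(([0], (np.asarray(v) == 0).astype(int), [0]))
    starts = np.flatnonzero(np.diff(z) == 1)
    ends = np.flatnonzero(np.diff(z) == -1)
    return len(starts), (ends - starts).tolist()

coeffs = np.array([0, 5, 0, 0, 3, 0, 0, 0, 2, 0])      # interleaved zeros
reordered = np.array([5, 3, 2, 0, 0, 0, 0, 0, 0, 0])   # nonzeros grouped first
print(zero_runs(coeffs))      # → (4, [1, 2, 3, 1])
print(zero_runs(reordered))   # → (1, [7])
```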
|
electrics
|
3,381 |
X-ray tomography of extended objects: a comparison of data acquisition approaches
|
eess.IV
|
The penetration power of x-rays allows one to image large objects. For
example, centimeter-sized specimens can be imaged with micron-level resolution
using synchrotron sources. In this case, however, the limited beam diameter and
detector size preclude the acquisition of the full sample in a single take,
necessitating strategies for combining data from multiple regions. Object
stitching involves the combination of local tomography data from overlapping
regions, while projection stitching involves the collection of projections at
multiple offset positions from the rotation axis followed by data merging and
reconstruction. We compare these two approaches in terms of radiation dose
applied to the specimen, and reconstructed image quality. Object stitching
involves an easier data alignment problem, and immediate viewing of subregions
before the entire dataset has been acquired. Projection stitching is more
dose-efficient, and avoids certain artifacts of local tomography; however, it
also involves a more difficult data assembly and alignment procedure, in that
it is more sensitive to cumulative registration error.
|
electrics
|
3,382 |
Hyperspectral Image Unmixing with Endmember Bundles and Group Sparsity Inducing Mixed Norms
|
eess.IV
|
Hyperspectral images provide much more information than conventional imaging
techniques, allowing a precise identification of the materials in the observed
scene, but because of the limited spatial resolution, the observations are
usually mixtures of the contributions of several materials. The spectral
unmixing problem aims at recovering the spectra of the pure materials of the
scene (endmembers), along with their proportions (abundances) in each pixel. In
order to deal with the intra-class variability of the materials and the induced
spectral variability of the endmembers, several spectra per material,
constituting endmember bundles, can be considered. However, the usual abundance
estimation techniques do not take advantage of the particular structure of
these bundles, organized into groups of spectra. In this paper, we propose to
use group sparsity by introducing mixed norms in the abundance estimation
optimization problem. In particular, we propose a new penalty which
simultaneously enforces group sparsity and within-group sparsity, at the cost
of being nonconvex. All the proposed penalties are compatible with the abundance
sum-to-one constraint, which is not the case with traditional sparse
regression. We show on simulated and real datasets that well chosen penalties
can significantly improve the unmixing performance compared to the naive bundle
approach.
|
electrics
|
3,383 |
A Count of Palm Trees from Satellite Image
|
eess.IV
|
In this research, the number of palm trees is counted programmatically from a
satellite image, taking advantage of the spatial resolution of the image, the
recognition capabilities of software, and the characteristics of the palm tree:
its regular top view, which can be distinguished in satellite images, and its
manner of cultivation, vertical growth, and stable form over long periods of
time. Other trees are mostly irregular in shape because of their twisted
branches. A palm tree consists of a long stem and a large, nearly circular
crown made of large fronds. Palms cast large self-shadows, unlike trees with
ordinary leaves. The large shadows and the circular top view give the palm a
distinctive signature, which we use to design a program that recognizes palms,
and only palms, and then counts them in any field shown in the satellite image.
This method is useful for counting palm trees for commercial, agricultural, or
environmental purposes. It can be applied to high-resolution satellite imagery
such as QuickBird, whose resolution is 0.6 meters. Lower-resolution images,
such as 10-meter SPOT imagery or even 5-meter imagery, do not show the interior
shadows of the palm's top view clearly enough; these shadows appear only in
images with a resolution of about 0.6 meters or finer. The method can also be
applied to aerial images, which are of course even more accurate. Satellite
images can be obtained free of charge from Google Earth, which can be
downloaded from the Google website and connects the user to a global database
of high-resolution images for all regions of the world.
|
electrics
|
3,384 |
Robust Real-time Ellipse Fitting Based on Lagrange Programming Neural Network and Locally Competitive Algorithm
|
eess.IV
|
Given a set of 2-dimensional (2-D) scattering points, which are usually
obtained from the edge detection process, the aim of ellipse fitting is to
construct an elliptic equation that best fits the collected observations.
However, some of the scattering points may contain outliers due to imperfect
edge detection. To address this issue, we devise a robust real-time ellipse
fitting approach based on two kinds of analog neural networks: the Lagrange
programming neural network (LPNN) and the locally competitive algorithm (LCA).
First, to alleviate the influence of these outliers, the fitting task is
formulated as a nonsmooth constrained optimization problem in which the
objective function is either an l1-norm or an l0-norm term. This is because,
compared with the l2-norm used in traditional ellipse fitting models, the
lp-norm with p<2 is less sensitive to outliers. Then, to calculate a real-time solution of
this optimization problem, LPNN is applied. As the LPNN model cannot handle the
non-differentiable term in its objective, the concept of LCA is introduced and
combined with the LPNN framework. Simulation and experimental results show that
the proposed ellipse fitting approach is superior to several state-of-the-art
algorithms.
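The robustness argument is easiest to see in one dimension, where the l2 and l1 objectives are minimized by the mean and the median, respectively. This is a sketch of the general principle only, not of the LPNN/LCA solver itself:

```python
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.0, 10.0])  # last observation is an outlier

l2_fit = data.mean()      # minimizer of the sum of squared residuals
l1_fit = np.median(data)  # minimizer of the sum of absolute residuals

print(l2_fit)  # 2.8 -- dragged toward the outlier
print(l1_fit)  # 1.0 -- essentially ignores it
```

The same effect carries over to residuals of an elliptic equation: squaring gives one bad point a quadratically large influence on the fit.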
|
electrics
|
3,385 |
Learned Compression Artifact Removal by Deep Residual Networks
|
eess.IV
|
We propose a method for learned compression artifact removal by
post-processing of BPG compressed images. We trained three networks of
different sizes. We encoded input images using BPG with different QP values. We
submitted the best combination of test images, encoded with different QP and
post-processed by one of three networks, which satisfy the file size and decode
time constraints imposed by the Challenge. The selection of the best
combination is posed as an integer programming problem. Although the visual
improvements in image quality are impressive, the average PSNR improvement for
the results is about 0.5 dB.
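For reference, the PSNR figure quoted above follows the standard definition below (this is the usual textbook formula, not code from the challenge submission):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# uniform error of 16 gray levels on 8-bit images
ref  = np.zeros((8, 8), dtype=np.uint8)
test = np.full((8, 8), 16, dtype=np.uint8)
print(round(psnr(ref, test), 2))  # 24.05
```

A 0.5 dB gain thus corresponds to roughly a 6% reduction in RMS error.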
|
electrics
|
3,386 |
A two-stage method for spectral-spatial classification of hyperspectral images
|
eess.IV
|
This paper proposes a novel two-stage method for the classification of
hyperspectral images. Pixel-wise classifiers, such as the classical support
vector machine (SVM), consider spectral information only; therefore they would
generate noisy classification results as spatial information is not utilized.
Many existing methods, such as morphological profiles, superpixel segmentation,
and composite kernels, exploit the spatial information too. In this paper, we
propose a two-stage approach to incorporate the spatial information. In the
first stage, an SVM is used to estimate the class probability for each pixel.
The resulting probability map for each class will be noisy. In the second
stage, a variational denoising method is used to restore these noisy
probability maps to get a good classification map. Our proposed method
effectively utilizes both spectral and spatial information of the hyperspectral
data sets. Experimental results on three widely used real hyperspectral data
sets indicate that our method is very competitive when compared with current
state-of-the-art methods, especially when the inter-class spectra are similar
or the percentage of the training pixels is high.
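The effect of the second stage can be mimicked in a minimal numpy sketch, with a 3x3 box filter standing in for the variational denoiser (the actual method is more sophisticated; this only illustrates why smoothing a noisy per-pixel probability map cleans up the classification):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.tile(np.arange(16) < 8, (16, 1))  # true class occupies the left half

# stage 1 stand-in: noisy per-pixel class probabilities from a spectral classifier
prob = np.where(truth, 0.9, 0.1) + rng.normal(0, 0.3, (16, 16))
prob = np.clip(prob, 0.0, 1.0)

# stage 2 stand-in: smooth the probability map with a 3x3 box filter
pad = np.pad(prob, 1, mode='edge')
smooth = sum(pad[i:i + 16, j:j + 16] for i in range(3) for j in range(3)) / 9.0

acc_noisy = ((prob > 0.5) == truth).mean()    # thresholding the raw map
acc_smooth = ((smooth > 0.5) == truth).mean() # thresholding the smoothed map
print(acc_noisy, acc_smooth)  # smoothing typically raises the accuracy
```

Because the true classes form spatially coherent regions, averaging over a neighborhood suppresses isolated misclassifications while barely moving the class boundary.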
|
electrics
|
3,387 |
Hyperspectral Unmixing by Nuclear Norm Difference Maximization based Dictionary Pruning
|
eess.IV
|
Dictionary pruning methods perform unmixing by identifying a smaller subset
of active spectral library elements that can represent the image efficiently as
a linear combination. This paper presents a new nuclear norm difference based
approach for dictionary pruning utilizing the low rank property of
hyperspectral data. The proposed workflow calculates the nuclear norm of
abundance of the original data assuming the whole spectral library as
endmembers. In the next step, the algorithm calculates nuclear norm of
abundance after appending a spectral library element with the data. The
spectral library elements having the maximum difference in the nuclear norm of
the obtained abundance matrices are suitable candidates for being image
endmember. The proposed workflow is verified on a large number of synthetic
datasets generated under varying conditions, as well as on several real images.
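The nuclear norm used in this workflow is simply the sum of the singular values of a matrix, which numpy exposes directly. The snippet below is a minimal sketch of the quantity being compared, on synthetic low-rank data, not the full pruning workflow:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic low-rank "data" matrix: 50 pixels x 30 bands, rank 2
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))

nuclear = np.linalg.norm(A, ord='nuc')        # nuclear norm: sum of singular values
svals = np.linalg.svd(A, compute_uv=False)

print(np.linalg.matrix_rank(A))               # 2
print(np.isclose(nuclear, svals.sum()))       # True
```

The low-rank property exploited by the method shows up here as a singular-value spectrum with only a few nonzero entries, so appending a spectral library element changes the nuclear norm by an amount that reflects how well it fits the data.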
|
electrics
|
3,388 |
Light interception modelling using unstructured LiDAR data in avocado orchards
|
eess.IV
|
In commercial fruit farming, managing the light distribution through canopies
is important because the amount and distribution of solar energy that is
harvested by each tree impacts the production of fruit quantity and quality. It
is therefore an important characteristic to measure and ultimately to control
with pruning. We present a solar-geometric model to estimate light interception
in individual avocado (Persea americana) trees, that is designed to scale to
whole-orchard scanning, ultimately to inform pruning decisions. The geometry of
individual trees was measured using LiDAR and represented by point clouds. A
discrete energy distribution model of the hemispherical sky was synthesised
using public weather records. The light from each sky node was then ray traced,
applying a radiation absorption model where rays pass the point cloud
representation of the tree. The model was validated using ceptometer energy
measurements at the canopy floor, and model parameters were optimised by
analysing the error between modelled and measured energies. The model was shown
to perform well qualitatively, through visual comparison with tree shadows in
photographs, and quantitatively, with R^2 = 0.854, suggesting it is suitable
for use in agricultural decision support systems in future work.
|
electrics
|
3,389 |
Pavement Crack Detection Based on Mobile Laser Scanning Data
|
eess.IV
|
Pavement cracks are one of the most important factors affecting road capacity.
China now has the longest highway mileage in the world, so detecting pavement
cracks with traditional manual methods is both time- and labor-consuming;
moreover, the results depend on the inspectors and are therefore subjective.
Meanwhile, detecting pavement cracks in digital images may be affected by
illumination and shadows, which can dramatically reduce detection precision.
Designing a new detection method therefore has important significance. This
paper proposes a new method for detecting pavement cracks using high-density
laser point clouds. High-density laser point clouds can be gathered by a
vehicle-borne laser scanning system, which integrates several types of sensors,
including GNSS/INS, laser scanners, and cameras, and automatically collects 3-D
spatial information around it at high speed; it is one of the most advanced 3-D
spatial information acquisition technologies. The system is unaffected by
illumination while gathering the point cloud and, moreover, gathers it very
fast, which greatly improves detection efficiency. The proposed method consists
of four parts: data preparation, image preprocessing, binarization, and crack
enhancement. It combines the advantages of digital images and laser point
clouds. The high-density laser point cloud is first interpolated into a
georeferenced feature (GRF) image; then median filtering, morphology, local
adaptive thresholding, and a multi-scale iterative tensor voting method are
used to detect pavement cracks from the GRF image. Finally, the Hausdorff
distance is used to evaluate detection precision. The SM value reached around
95, indicating that pavement cracks are well detected and that the proposed
method can serve municipal departments well in detecting pavement cracks.
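Of the four stages, the local adaptive threshold is easy to sketch: a pixel is flagged as a crack candidate when it is darker than its neighborhood mean by some offset. This is illustrative only; the block size and offset are made-up values, and the real pipeline also applies median filtering, morphology, and tensor voting:

```python
import numpy as np

def local_adaptive_threshold(img, block=5, offset=0.02):
    """Mark pixels darker than their local mean minus an offset (cracks are dark)."""
    k = block // 2
    pad = np.pad(img, k, mode='edge')
    h, w = img.shape
    local_mean = np.zeros((h, w))
    for i in range(block):
        for j in range(block):
            local_mean += pad[i:i + h, j:j + w]
    local_mean /= block * block
    return img < (local_mean - offset)

# bright pavement (0.8) with one dark crack line (0.2)
img = np.full((9, 9), 0.8)
img[4, :] = 0.2
mask = local_adaptive_threshold(img)
print(int(mask.sum()))  # 9 -- only the crack row is flagged
```

Because the threshold adapts to the local mean, the crack stands out even when overall brightness varies across the GRF image.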
|
electrics
|
3,390 |
Blind Ptychography: Uniqueness and Ambiguities
|
eess.IV
|
Ptychography with an unknown mask and object is analyzed for general
ptychographic measurement schemes that are strongly connected and possess an
anchor.
Under a mild constraint on the mask phase, it is proved that the masked
object estimate must be the product of a block phase factor and the true masked
object. This local uniqueness manifests itself in the phase drift equation that
determines the ambiguity at different locations connected by ptychographic
shifts.
The proposed mixing schemes effectively connect the ambiguities throughout the
whole domain, so that a distinct ambiguity profile arises; consequently, global
uniqueness holds: the block phases have an affine profile, and the object and
mask can be simultaneously recovered up to a constant scaling factor and an
affine phase factor.
|
electrics
|
3,391 |
A Multi-task Network to Detect Junctions in Retinal Vasculature
|
eess.IV
|
Junctions in the retinal vasculature are key points to be able to extract its
topology, but they vary in appearance, depending on vessel density, width and
branching/crossing angles. The complexity of junction patterns is usually
accompanied by a scarcity of labels, which discourages the usage of very deep
networks for their detection. We propose a multi-task network, generating
labels for vessel interior, centerline, edges and junction patterns, to provide
additional information to facilitate junction detection. After the initial
detection of potential junctions in junction-selective probability maps,
candidate locations are re-examined in centerline probability maps to verify if
they connect at least 3 branches. Experiments on the DRIVE and IOSTAR datasets
showed that our method outperformed a recent study in which a popular deep
network was trained as a classifier to find junctions. Moreover, the proposed
approach is applicable to unseen datasets with the same degree of success,
after training it only once.
|
electrics
|
3,392 |
High Performance Computing in Medical Image Analysis HuSSaR
|
eess.IV
|
In our former work we made serious efforts to improve the performance of
medical image analysis methods using ensemble-based systems. In this paper, we
present a novel hardware-based solution for the efficient adoption of our
complex, fusion-based approaches in real-time applications. Even though most
image processing problems and the increasing amount of data have
high-performance computing (HPC) demands, there is still a lack of dedicated
HPC solutions for several medical tasks. To relieve this bottleneck, we have
developed a Hybrid Small Size high-performance computing Resource (abbreviated
HuSSaR), which efficiently combines CPU and GPU technologies, is mobile, and
has its own cooling system to support easy mobility and wide applicability.
Besides a proper technical description, we include several
practical examples from the clinical data processing domain in this work. For
more details see also:
https://arato.inf.unideb.hu/kovacs.laszlo/research_hybridmicrohpc.html
|
electrics
|
3,393 |
Comparative survey: People detection, tracking and multi-sensor Fusion in a video sequence
|
eess.IV
|
Tracking people in a video sequence is one of the fields of interest in
computer vision. It has broad applications in motion capture and surveillance.
However, due to the complexity of human dynamic structure, detecting and
tracking are not straightforward. Consequently, different detection and
tracking techniques with different applications and performance have been
developed. To minimize the noise between the prediction and measurement during
tracking, Kalman filter has been used as a filtering technique. At the same
time, in most cases, detection and tracking results from a single sensor is not
enough to detect and track a person. To avoid this problem, using a
multi-sensor fusion technique is indispensable. In this paper, a comparative
survey of detection, tracking and multi-sensor fusion methods are presented.
|
electrics
|
3,394 |
Sigmoid function based intensity transformation for parameter initialization in MRI-PET Registration Tool for Preclinical Studies
|
eess.IV
|
Images from Positron Emission Tomography (PET) deliver functional data such
as perfusion and metabolism. On the other hand, images from Magnetic Resonance
Imaging (MRI) provide information describing anatomical structures. Fusing the
complementary information from the two modalities is helpful in oncology. In
this project, we implemented a complete tool allowing semi-automatic MRI-PET
registration for small animal imaging in the preclinical studies. A two-stage
hierarchical registration approach is proposed. First, a global affine
registration is applied. For robust and fast registration, principal component
analysis (PCA) is used to compute the initial parameters for the global affine
registration. Since only the low intensities in the PET volume reveal the
anatomic information on the MRI scan, we propose a non-uniform intensity
transformation of the PET volume to enhance the contrast of the low intensities.
This helps to improve the computation of the centroid and principal axis by
increasing the contribution of the low intensities. Then, the globally
registered image is given as input to the second stage which is a local
deformable registration (B-spline registration). Mutual information is used as
metric function for the optimization. A multi-resolution approach is used in
both stages. The registration algorithm is supported by a graphical user
interface (GUI) and visualization methods so that the user can interact easily
with the process. The performance of the registration algorithm is validated by
two medical experts on seven different datasets on abdominal and brain areas
including noisy and difficult image volumes.
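The initialization idea can be sketched as follows: a decreasing sigmoid remaps intensities so that low PET values receive large weights, and the centroid and principal axes of the intensity-weighted voxel cloud supply the initial affine parameters. This is an illustrative sketch; the parameter values and the exact form of the authors' transformation are assumptions:

```python
import numpy as np

def sigmoid_enhance(vol, center=0.3, slope=10.0):
    """Decreasing sigmoid: boosts the contribution of low intensities."""
    return 1.0 / (1.0 + np.exp(slope * (vol - center)))

def pca_init(weights):
    """Centroid and principal axes of an intensity-weighted voxel cloud."""
    coords = np.argwhere(weights > 0).astype(float)
    w = weights[weights > 0]
    centroid = (coords * w[:, None]).sum(axis=0) / w.sum()
    cov = np.cov(coords.T, aweights=w)
    _, axes = np.linalg.eigh(cov)  # columns are the principal axes
    return centroid, axes

# two equally weighted voxels: the centroid lies halfway between them
vol = np.zeros((5, 5))
vol[2, 1] = vol[2, 3] = 1.0
centroid, axes = pca_init(vol)
print(centroid)  # [2. 2.]
```

Feeding `pca_init` the sigmoid-enhanced PET volume rather than the raw one shifts the centroid and axes toward the low-intensity anatomy, which is the stated goal of the transformation.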
|
electrics
|
3,395 |
A Convex Model for Edge-Histogram Specification with Applications to Edge-preserving Smoothing
|
eess.IV
|
The goal of edge-histogram specification is to find an image whose edge image
has a histogram that matches a given edge-histogram as much as possible.
Mignotte has proposed a non-convex model for the problem [M. Mignotte. An
energy-based model for the image edge-histogram specification problem. IEEE
Transactions on Image Processing, 21(1):379--386, 2012]. In his work, edge
magnitudes of an input image are first modified by histogram specification to
match the given edge-histogram. Then, a non-convex model is minimized to find
an output image whose edge-histogram matches the modified edge-histogram. The
non-convexity of the model hinders the computations and the inclusion of useful
constraints such as the dynamic range constraint. In this paper, instead of
considering edge magnitudes, we directly consider the image gradients and
propose a convex model based on them. Furthermore, we include additional
constraints in our model based on different applications. The convexity of our
model allows us to compute the output image efficiently using either the
Alternating Direction Method of Multipliers or the Fast Iterative
Shrinkage-Thresholding Algorithm. We consider several applications in
edge-preserving smoothing including image abstraction, edge extraction, details
exaggeration, and documents scan-through removal. Numerical results are given
to illustrate that our method successfully produces decent results efficiently.
|
electrics
|
3,396 |
Pansharpening via Detail Injection Based Convolutional Neural Networks
|
eess.IV
|
Pansharpening aims to fuse a multispectral (MS) image with an associated
panchromatic (PAN) image, producing a composite image with the spectral
resolution of the former and the spatial resolution of the latter. Traditional
pansharpening methods can be ascribed to a unified detail injection context,
which views the injected MS details as the integration of PAN details and
band-wise injection gains. In this work, we design a detail injection based CNN
(DiCNN) framework for pansharpening, with the MS details being directly
formulated in an end-to-end manner, where the first detail injection based CNN
(DiCNN1) mines MS details through the PAN image and the MS image, and the
second one (DiCNN2) utilizes only the PAN image. The main advantage of the
proposed DiCNNs is that they provide explicit physical interpretations and
achieve fast convergence while maintaining high pansharpening quality.
Furthermore, the effectiveness of the proposed approaches is also analyzed from
a relatively theoretical point of view. Our methods are evaluated via
experiments on real-world MS image datasets, achieving excellent performance
when compared to other state-of-the-art methods.
|
electrics
|
3,397 |
Deterministic X-ray Bragg coherent diffraction imaging as a seed for subsequent iterative reconstruction
|
eess.IV
|
Coherent diffractive imaging (CDI), using both X-rays and electrons, has made
extremely rapid progress over the past two decades. The associated
reconstruction algorithms are typically iterative, and seeded with a crude
first estimate. A deterministic method for Bragg Coherent Diffraction Imaging
(Pavlov et al., Sci. Rep. 7, 1132 (2017)) is used as a more refined starting
point for a shrink-wrap iterative reconstruction procedure. For comparison, the
autocorrelation function is also used as a starting point.
Real-space and Fourier-space error metrics are used to analyse the convergence
of the reconstruction procedure for noisy and noise-free simulated data. Our
results suggest that the use of deterministic-CDI reconstructions, as a seed
for subsequent iterative-CDI refinement, may boost the speed and degree of
convergence compared to the cruder seeds that are currently commonly used. We
also highlight the utility of monitoring multiple error metrics in the context
of iterative refinement.
|
electrics
|
3,398 |
Combining Radon transform and Electrical Capacitance Tomography for a $2d+1$ imaging device
|
eess.IV
|
This paper describes a coplanar, non-invasive, non-destructive capacitive
imaging device. We first introduce a mathematical model for its output, and
discuss some of its theoretical capabilities. We show that the data obtained
from this device can be interpreted as a weighted Radon transform of the
electrical permittivity of the measured object near its surface. Image
reconstructions from experimental data provide good surface resolution as well
as short depth imaging, making the apparatus a $2d+1$ imager. The quality of
the images leads us to expect that excellent results can be delivered by
\emph{ad-hoc} optimized inversion formulas. There are also interesting, yet
unexplored, theoretical questions on imaging that this sensor will allow us to
test.
|
electrics
|
3,399 |
Performance Comparison of Convolutional AutoEncoders, Generative Adversarial Networks and Super-Resolution for Image Compression
|
eess.IV
|
Image compression has been investigated for many decades. Recently, deep
learning approaches have achieved great success in many computer vision tasks
and are gradually being used in image compression. In this paper, we develop
three overall compression architectures based on convolutional autoencoders
(CAEs), generative adversarial networks (GANs) as well as super-resolution
(SR), and present a comprehensive performance comparison. According to
experimental results, CAEs achieve better coding efficiency than JPEG by
extracting compact features. GANs show potential advantages on large
compression ratio and high subjective quality reconstruction. Super-resolution
achieves the best rate-distortion (RD) performance among them, which is
comparable to BPG.
|
electrics
|