Image Processing Projects

IEEE digital image processing projects for engineering students: M.Tech, B.Tech, BE, MS, and MCA.

Matlab image processing projects with source code and IEEE papers

Image processing, or digital image processing, is a technique for improving image quality by applying mathematical operations. Image processing projects involve modifying images either by treating them as two-dimensional signals and enhancing them through comparison with a reference signal, or by modifying characteristic parameters of the digital image. Either way, if you want a project on image processing, we can help you.

Your project on image processing will be distinct, and you can choose from multiple IEEE papers on image processing. CITL offers image processing projects for final-year engineering and computer science students, IEEE projects based on image processing, and mini image processing projects. Choose your final-year project from our latest 2022 IEEE image processing projects, or get help with your project idea and a digital image processing tutorial.

Top 200+ Image Processing Projects – Source Code and Abstracts.

1.Efficient Protection of Palms from RPW Larvae using Wireless Sensor Networks

Red Palm Weevil (Rhynchophorus ferrugineus, RPW) is one of the most serious pests of coconut (Cocos nucifera L.) palms and is known to attack 20 palm species worldwide. Due to the concealed feeding behaviour of the RPW, infestation is typically detected only in the late stages, and farmers become aware of the problem only when the tree is about to die. The acoustic activity of RPW larvae (inside an offshoot and at the base of leaves) consists of chewing, crawling, emission and quick oscillating sounds.

In this paper, acoustic techniques are used to detect hidden larval infestations of coconut palms at an early stage; the sounds are recorded using a wireless sensor network configured as an ad-hoc network. The acoustic activity generated by the RPW larvae, together with environmental noise, is captured by wireless sensor nodes fixed to the palms and transmitted to a server through access points, each covering a number of palms arranged in a hexagonal layout, for processing with MATLAB tools.

This method is inexpensive when compared to the existing methods for the detection of RPW larvae. The simulation results are encouraging in establishing the detection of larvae (grubs) inside the palm, thus enabling the farmer to take up the control measures before the damage reaches the economic threshold.
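
As a rough illustration of the frequency-analysis step mentioned above, the sketch below estimates the dominant frequency of one recorded window. It is a small Python/NumPy example, not the project's MATLAB code, and the search band limits are assumptions.

import numpy as np

def dominant_frequency(signal, fs, fmin=100.0, fmax=4000.0):
    # signal: 1-D array of samples, fs: sampling rate in Hz.
    # The [fmin, fmax] band is an illustrative assumption intended to
    # suppress low-frequency environmental noise.
    windowed = signal * np.hanning(len(signal))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

# Example: a synthetic 1 kHz tone buried in noise
fs = 16000
t = np.arange(fs) / fs
frame = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.random.randn(fs)
print(dominant_frequency(frame, fs))   # approximately 1000.0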

2.Infant cry analysis and detection

In this paper we propose an algorithm for automatic detection of an infant cry. A particular application of this algorithm is the identification of a physical danger to babies, such as situations in which parents leave their children in vehicles. The proposed algorithm is based on two main stages. The first stage involves feature extraction, in which pitch related parameters, MFC (mel-frequency cepstrum) coefficients and short-time energy parameters are extracted from the signal.

In the second stage, the signal is classified using the k-NN algorithm and is later verified as a cry signal, based on the pitch and harmonics information. In order to evaluate the performance of the algorithm in real world scenarios, we checked the robustness of the algorithm in the presence of several types of noise, and especially noises such as car horns and car engines that are likely to be present in vehicles.

In addition, we addressed real time and low complexity demands during the development of the algorithm. In particular, we used a voice activity detector, which disabled the operation of the algorithm when voice activity was not present. A database of baby cry signals was used for performance evaluation. The results showed good performance of the proposed algorithm, even at low SNR.
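
For a feel of the first-stage features described above, here is a minimal Python/NumPy sketch of short-time energy and an autocorrelation-based pitch estimate (MFCC extraction and the k-NN stage are omitted; the frame size and pitch range are assumptions, not the authors' settings).

import numpy as np

def short_time_energy(frame):
    # Sum of squared samples in one analysis frame
    return float(np.sum(frame.astype(np.float64) ** 2))

def autocorr_pitch(frame, fs, fmin=250.0, fmax=600.0):
    # Rough pitch estimate (Hz) from the autocorrelation peak.
    # The 250-600 Hz search range is an assumption for infant cries.
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def frame_features(signal, fs, frame_len=512, hop=256):
    # Frame the signal (512 samples is roughly 32 ms at 16 kHz) and stack
    # the two features per frame
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        f = signal[start:start + frame_len]
        feats.append([short_time_energy(f), autocorr_pitch(f, fs)])
    return np.array(feats)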

3.Prosody Modification of Speech and Singing For Tutoring Applications

In this work, we discuss prosodic transformations in terms of syllable durations and pitch, in the context of speech and music tutoring applications. We address some specific issues that arise with the use of TD-PSOLA based time- and pitch-scaling in the context of the singing and speech transformation to pre-defined target prosody. Time alignment is performed by matching automatically detected syllable onsets of the source and target followed by time-scaling and pitch-shifting using TD-PSOLA with attention to the choice of pitch marks and analysis-synthesis windows. Experiments demonstrate that TD-PSOLA can provide artifact free perceived quality without explicit pitch mark detection by using longer analysis synthesis windows.

4.Comparative study of color iris recognition: DCT vs. vector quantization approaches in RGB and HSV color spaces

Security is essential in the digital world. It requires robust and reliable mechanisms that identify individuals unambiguously. Biometrics plays an important role in recognizing individuals uniquely; furthermore, iris-based security is harder to defeat than fingerprint-based security. The human iris also does not change with ageing and can be captured easily.

The generic iris recognition process involves extensive preprocessing such as iris localization, which makes it time intensive; further, if preprocessing is not done properly, accuracy suffers because of noisy images. In this paper, an iris recognition system is proposed that provides good accuracy even after eliminating the iris localization step, which the literature usually treats as mandatory.

It evaluates the impact on accuracy of different color spaces, namely Hue Saturation Value (HSV) and Red Green Blue (RGB); better performance is observed in the HSV color space. This work also evaluates the accuracy of feature extraction techniques, Discrete Cosine Transform (DCT) and Vector Quantization (VQ) algorithms such as Linde Buzo Gray (LBG) and Kekre's Fast Codebook Generation (KFCG), in both RGB and HSV color spaces. It is observed that the vector quantization algorithms perform better in the HSV color space.

5.Video analytics for traffic monitoring: A moving objects based real-time defogging method for traffic monitoring videos

In this paper, a moving-objects-based real-time defogging method for traffic monitoring videos is proposed. First, the dark channel prior based image defogging method is improved. Then, the proposed image defogging method is applied to traffic monitoring video defogging. To improve processing speed, the correlation between adjacent video frames is exploited, and moving objects are detected using the adjacent frame difference method.

The frame content is divided into moving foreground and background. Afterwards, the foreground and background are processed with different defogging strategies to reduce the computational complexity of the defogging processing. Experimental results show that the proposed method produces a good defogging effect, which facilitates subsequent intelligent traffic analysis. Furthermore, the proposed method is fast enough to process standard-definition videos at a speed of 26 frames per second on average.
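
The two building blocks named above can be sketched compactly. The following Python/NumPy example shows a plain dark channel prior transmission estimate and adjacent-frame differencing; it is illustrative only, parameter values are assumptions, and the authors' improved MATLAB implementation is not reproduced.

import numpy as np

def dark_channel(img, patch=15):
    # img: H x W x 3 array scaled to [0, 1]; per-pixel minimum over the
    # colour channels followed by a patch-wise minimum (a simple erosion)
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def defog(img, atmos, omega=0.95, t0=0.1):
    # t(x) = 1 - omega * dark_channel(I / A); J = (I - A) / max(t, t0) + A
    t = 1.0 - omega * dark_channel(img / atmos)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - atmos) / t + atmos, 0.0, 1.0)

def moving_mask(prev_gray, curr_gray, thresh=15):
    # Adjacent-frame difference: only pixels that changed need full defogging
    return np.abs(curr_gray.astype(np.int32) - prev_gray.astype(np.int32)) > thresh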

6.Simultaneous Feature and Dictionary Learning for Image Set Based Face Recognition

In this paper, we propose a simultaneous feature and dictionary learning (SFDL) method for image set based face recognition, where each training and testing example contains a set of face images which were captured from different variations of pose, illumination, expression, resolution and motion. While a variety of feature learning and dictionary learning methods have been proposed in recent years and some of them have been successfully applied to image set based face recognition, most of them learn features and dictionaries for facial image sets individually, which may not be powerful enough because some discriminative information for dictionary learning may be compromised in the feature learning stage if they are applied sequentially, and vice versa.

To address this, we propose a SFDL method to learn discriminative features and dictionaries simultaneously from raw face pixels so that discriminative information from facial image sets can be jointly exploited by a one-stage learning procedure. To better exploit the nonlinearity of face samples from different image sets, we propose a deep SFDL (D-SFDL) method by jointly learning hierarchical non-linear transformations and class-specific dictionaries to further improve the recognition performance. Extensive experimental results on five widely used face datasets clearly show that our SFDL and D-SFDL achieve very competitive or even better performance with the state-of-the-arts.

7.Contrast and color improvement based haze removal of underwater images using fusion technique

Scattering and absorption of light in water degrade images captured under water. This degradation includes diminished colors, low brightness and indistinguishable objects. To improve the quality of such degraded images, we propose a fusion-based underwater image enhancement technique that focuses on improving the contrast and color of underwater images using contrast stretching and auto white balance. The proposed method is simple and straightforward, yet contributes greatly to improving the visibility of underwater images.

8.An adaptive image dehazing algorithm based on dark channel prior

Traditional dehazing algorithms based on the dark channel prior may lack robustness to variations in hazy weather and may fail in bright regions. To resolve these issues, this paper proposes an improved adaptive dehazing algorithm based on the dark channel prior. Our method can adaptively calculate dehazing parameters, such as the degree of haze removal. Here the dehazing parameters are local rather than global variables.

We compute the local dehazing parameter automatically according to the haze distribution, which enables our method to handle different dehazing degrees under various weather conditions and makes haze removal more robust. We also propose a new method to optimize the rough transmission parameters, which helps to remove the distortion in bright regions. Experiments confirm the advantages of our method, such as robustness across different scenes, high color fidelity of the restored images and greatly enhanced details in the hazy regions.

9.Robust and Fast Detection of Moving Vehicles in Aerial Videos using Sliding Windows

The detection of vehicles driving on busy urban streets in videos acquired by airborne cameras is challenging due to the large distance between camera and vehicles, simultaneous vehicle and camera motion, shadows, or low contrast due to weak illumination. However, it is an important processing step for applications such as automatic traffic monitoring, detection of abnormal behaviour, border protection, or surveillance of restricted areas.

In contrast to commonly applied object segmentation methods based on background subtraction or frame differencing, we detect moving vehicles using the combination of a track-before-detect (TBD) approach and machine learning: an AdaBoost classifier learns the appearance of vehicles in low resolution and is applied within a sliding window algorithm to detect vehicles inside a region of interest determined by the TBD approach. Our main contribution lies in the identification, optimization, and evaluation of the most important parameters to achieve both high detection rates and real-time processing.

10.A General Video Surveillance Framework for Animal Behavior Analysis

This paper proposes a general intelligent video surveillance monitoring system to explore and examine some problems in animal behavior analysis particularly in cow behaviors. In this concern, farmers, animal health professionals and researchers have well recognized that analysis of changes in the behavioral patterns of cattle is an important factor for an animal health and welfare management system.

Also, in today's dairy world, farm sizes are growing larger and larger; as a result, the attention time available for individual animals becomes smaller and smaller. Thus, video-based monitoring is an emerging technology ushering in an era of intelligent monitoring systems. In this context, image processing is a promising technique for such a challenging system because it is relatively low cost and simple to implement.

One of the important issues in the management of group-housed livestock is the early detection of abnormal behaviors in a cow. In particular, failure to detect estrus in a timely and accurate manner can seriously undermine reproductive performance. Another aspect concerns health management: identifying poor health conditions, such as lameness, through analysis of measured motion data.

Lameness is one of the biggest health and welfare issues in modern intensive dairy farming. Although there are a tremendous number of methods for detecting estrus, they still need to become more accurate and practical. Thus, in this paper, a general intelligent video surveillance framework for animal behavior analysis is proposed using (i) various types of background models for target extraction, (ii) Markov and hidden Markov models for detecting various types of behaviors among the targets, and (iii) dynamic programming and Markov decision processes for producing output results. As an illustration, a pilot experiment is performed to confirm the feasibility and validity of the proposed framework.

11.An Analytic Gabor Feedforward Network for Single-sample and Pose-invariant Face Recognition

Gabor magnitude is known to be among the most discriminative representations for face images due to its space frequency co-localization property. However, such property causes adverse effects even when the images are acquired under moderate head pose variations. To address this pose sensitivity issue as well as other moderate imaging variations, we propose an analytic Gabor feedforward network which can absorb such moderate changes.

Essentially, the network works directly on the raw face images and produces directionally projected Gabor magnitude features at the hidden layer. Subsequently, several sets of magnitude features obtained from various orientations and scales are fused at the output layer for final classification decision.

The network model is analytically trained using a single sample per identity. The obtained solution is globally optimal with respect to the classification total error rate. Our empirical experiments conducted on five face datasets (six subsets) from the public domain show encouraging results in terms of identification accuracy and computational efficiency.

12.Fire detection using infrared images for UAV-based forest fire surveillance

Unmanned aerial vehicle (UAV) based computer vision systems are increasingly employed as a promising option for forest fire surveillance and detection. In this paper, an image processing method suited to UAVs is presented for the automatic detection of forest fires in infrared (IR) images. The presented algorithm uses brightness and motion cues together with image processing techniques based on histogram-based segmentation and an optical flow approach for fire pixel detection.

First, the histogram-based segmentation is used to extract the hot objects as fire candidate regions. Then, the optical flow method is adopted to calculate motion vectors of the candidate regions. The motion vectors are also further analyzed to distinguish fires from other fire analogues. Through performing morphological operations and blob counter method, a fire can be finally tracked in each IR image. Experimental results verified that the designed method can effectively extract and track fire pixels in IR video sequences.
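
A very small Python sketch of the first step, extracting hot candidate regions and counting blobs, is shown below. The percentile threshold is an assumption standing in for the paper's histogram-based segmentation, and the optical-flow analysis of candidate regions is not shown.

import numpy as np
from scipy import ndimage   # used only for connected-component labelling

def hot_regions(ir_frame, percentile=98.0):
    # Pixels above a high percentile of the IR frame are kept as fire
    # candidates; connected components give a rough blob count.
    mask = ir_frame >= np.percentile(ir_frame, percentile)
    labels, count = ndimage.label(mask)
    return mask, labels, count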

13.Gait recognition using Active Energy Image and Gabor wavelet

Gait recognition has recently gained significant attention. It identifies individuals in video sequences by the way they walk. The Active Energy Image (AEI) is a more efficient representation than the Gait Energy Image (GEI), and Gabor wavelets have been used successfully in face recognition, so we use Gabor wavelets to extract the amplitude spectrum of the AEI and study the recognition ability of this amplitude feature. The algorithm is tested on CASIA Dataset B and achieves high correct recognition rates.

14.Bilateral Two-Dimensional Neighborhood Preserving Discriminant Embedding for Face Recognition

In this paper, we propose a novel bilateral 2-D neighborhood preserving discriminant embedding for supervised linear dimensionality reduction for face recognition. It directly extracts discriminative face features from images based on graph embedding and Fisher's criterion. The proposed method is a manifold learning algorithm based on graph embedding criterion, which can effectively discover the underlying nonlinear face data structure.

Both within-neighboring and between-neighboring information are taken into account to seek an optimal projection matrix by minimizing the intra-class scatter and maximizing the inter-class scatter based on Fisher's criterion. The performance of the proposed method is evaluated and compared with other face recognition schemes on the Yale, PICS, AR, and LFW databases. The experimental results demonstrate the effectiveness and superiority of the proposed method compared with state-of-the-art dimensionality reduction algorithms.

15.K-nearest correlated neighbor classification for Indian sign language gesture recognition using feature fusion

A sign language recognition system is an attempt to bring the speech- and hearing-impaired community closer to more regular and convenient forms of communication. Such a system needs to recognize the gestures of a sign language and convert them into a form easily understood by hearing people. The model proposed in this paper recognizes static images of signed alphabets in Indian Sign Language (ISL). Unlike the alphabets of other sign languages, such as American Sign Language and Chinese Sign Language, the ISL alphabet includes both single-handed and double-handed signs.

Hence, to make recognition easier, the model first categorizes signs as single-handed or double-handed. For both categories, two kinds of features, namely HOG and SIFT, are extracted from a set of training images and combined into a single matrix. The HOG and SIFT features of the input test image are then combined with the HOG and SIFT feature matrices of the training set. Correlation is computed for these matrices and fed to a K-nearest neighbor classifier to obtain the classification of the test image.
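
The final classification step can be sketched as follows in Python/NumPy: fused descriptors are compared by correlation and the top-k correlated training samples vote. This is a simplified reading of the correlation + k-NN stage, with hypothetical input names, not the authors' implementation.

import numpy as np

def correlation_knn(train_feats, train_labels, test_feat, k=3):
    # train_feats: N x D matrix of fused (e.g. HOG+SIFT) descriptors,
    # test_feat: length-D fused descriptor of the query sign image.
    def pearson(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    corrs = np.array([pearson(f, test_feat) for f in train_feats])
    top_k = np.argsort(corrs)[::-1][:k]          # k most correlated samples
    votes = np.asarray(train_labels)[top_k]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]             # majority vote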

16.Human activity recognition using neural networks

This paper presents research on independent daily-life assistance for elderly persons or persons with disabilities using IoT technologies. Our aim is to develop a system that allows people to live for as long as possible in a familiar environment. This will be made possible by the wider spread of assistive technologies and the Internet of Things (IoT). We aim to bring together the latest achievements in the Internet of Things and assistive technologies in order to develop a complex assistive system with adaptive capability and learning behavior.

We can use IoT technologies to monitor in real time the state of a patient or to get sensitive data in order to be subsequently analyzed for a medical diagnosis. We present the state of our work related to the development of an assistive assembly consisting of a smart and assistive environment, a human activity and health monitoring system, an assistive and telepresence robot, together with the related components and cloud services.

17.A Comparative Study On Video Steganography in Spatial and IWT Domain

Steganography is a technique for embedding digital information inside another digital medium such as text, images, audio signals or video signals, without revealing its presence in the medium. In video steganography, a video file is used as a cover medium within which a secret message can be embedded. The secret information can be hidden either directly, by altering the pixel values of the images in the spatial domain, or in the frequency components of the images after transforming them into the frequency domain using transformation algorithms such as the DCT (Discrete Cosine Transform), DWT (Discrete Wavelet Transform) and IWT (Integer Wavelet Transform).

In this paper, secret data are embedded inside a video file using both the methods, spatial and frequency, and the outcomes are analysed and compared. Results are compared based on PSNR (Peak Signal to Noise Ratio), MSE (Mean Square Error), BER (Bit Error Rate) and Standard Deviation. The findings of this study are given as suggestions for further enhancement.
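
To make the spatial-domain side and the quality metrics concrete, here is a small Python/NumPy sketch of least-significant-bit embedding in one frame together with MSE and PSNR; it is an illustrative stand-in, not the comparison code used in the paper.

import numpy as np

def embed_lsb(cover, bits):
    # Hide a 0/1 bit array in the least significant bit of the first
    # len(bits) pixels of a uint8 cover frame (flattened order)
    flat = cover.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def mse(a, b):
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Example with a random "frame" and 1000 random payload bits
cover = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
bits = np.random.randint(0, 2, 1000, dtype=np.uint8)
stego = embed_lsb(cover, bits)
print(mse(cover, stego), psnr(cover, stego))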

18.A Decomposition Framework for Image Denoising Algorithms

In this paper, we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). The strategy we develop is to denoise the components of the image in the moving frame in order to preserve its local geometry, which would be more affected if the image were processed directly. Experiments on a whole image database tested with several denoising methods show that this framework can provide better results than denoising the image directly, in terms of both the peak signal-to-noise ratio and the structural similarity index metrics.

19.Lung cancer detection using digital Image processing On CT scan Images

Lung cancer is one of the leading causes of death worldwide and has a very high mortality rate. There are various cancers, such as lung cancer, breast cancer, etc. Early-stage detection of lung cancer is important for successful treatment. Diagnosis is based on Computed Tomography (CT) images. In this work, histogram equalization is used to preprocess the images, followed by feature extraction and a classifier that checks whether a patient's condition at an early stage is normal or abnormal.
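
The preprocessing step named above, histogram equalization, has a standard form that can be sketched in a few lines of Python/NumPy (shown for a uint8 CT slice; this is the textbook method, not the paper's full pipeline).

import numpy as np

def histogram_equalize(gray):
    # Map grey levels through the normalized cumulative histogram
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)   # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)                  # look-up table
    return lut[gray]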

20.Tumor Detection in Brain MRI Image Using Template based K-means and Fuzzy C-means Clustering Algorithm.

This paper presents a robust segmentation method that integrates template-based K-means and modified Fuzzy C-means (TKFCM) clustering, reducing operator and equipment error. In this method, the template is selected based on convolution between the gray-level intensities of a small portion of the brain image and the brain tumor image. The K-means algorithm emphasizes the initial segmentation through proper selection of the template.

Updated memberships are obtained from the distances between cluster centroids and cluster data points until convergence. This Euclidean distance depends on several features, i.e., intensity, entropy, contrast, dissimilarity and homogeneity of the coarse image, whereas conventional FCM depends only on intensity similarity. Then, on the basis of the updated memberships and automatic cluster selection, a sharply segmented image with the tumor marked in red is obtained from the modified FCM technique.

Small deviations in gray-level intensity between normal and abnormal tissue are detected through TKFCM. The performance of the TKFCM method is analyzed with a neural network, which provides better regression and lower error. The performance parameters show that the method is effective in detecting tumors in brain MRI images with multiple intensity levels.
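
As background for the initial segmentation stage, a plain K-means clustering of grey-level intensities can be sketched as follows in Python/NumPy. The template selection and the modified-FCM refinement of the paper are not reproduced here.

import numpy as np

def kmeans_intensity(gray, k=4, iters=20, seed=0):
    # Cluster the grey levels of an MRI slice into k intensity classes
    rng = np.random.default_rng(seed)
    pixels = gray.astype(np.float64).ravel()
    centers = rng.choice(pixels, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(gray.shape), centers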

21.Tumor segmentation by fusion of MRI images using copula based Statistical methods.

In this paper, we propose a statistical fusion approach that fuses three different cerebral MRI sequences (T1, T2 and FLAIR) in order to segment the tumoral volume. As T1, T2 and FLAIR provide complementary information, we propose a new fusion method based on a copula, which is capable of representing the statistical dependency between different modalities.

Indeed, the copula is a functional dependency measure that is able to identify complementary information in the case of independence and to eliminate redundant information in the case of dependence. To take this dependency into account, our segmentation is based on a Hidden Markov Field (HMF) statistical model in which the observation distribution is a multivariate distribution whose margins represent the intensity distributions of the individual modalities, while the copula represents the dependency between the modalities.

In this paper, we present a tumor segmentation based on an HMF using non-standardized Gamma distributions for the margins to model tumor tissue distributions, and a Gaussian copula to describe the dependency between T1, T2 and FLAIR. Real MRI images from different patients are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations shows the advantages of the proposed fusion method.

22.Image Quality Improvement in Kidney Stone Detection on Computed Tomography Images

Kidney-Urine-Belly computed tomography (KUB CT) analysis is an imaging modality that has the potential to enhance kidney stone screening and diagnosis. This study explored the development of a semi-automated program that uses image processing techniques and geometric principles to define the boundary and segmentation of the kidney area and to enhance kidney stone detection. It marks detected kidney stones and provides an output that identifies the size and location of the kidney based on pixel counts.

The program was tested on standard KUB CT scan slides from 39 patients at Imam Reza Hospital in Iran who were divided into two groups based on the presence and absence of kidney stones in their hospital records. Of these, the program generated six inconsistent results which were attributed to the poor quality of the original CT scans. Results showed that the program has 84.61 per cent accuracy, which suggests the program’s potential in diagnostic efficiency for kidney stone detection.

23.Technique for QRS complex detection using particle swarm optimization

A new technique for QRS complex detection of electrocardiogram signals, using particle swarm optimisation (PSO)- based adaptive filter (AF), is proposed. In the proposed method, the AF, based on PSO, is used to generate the feature. An effective detection algorithm, containing search-backs for missed peaks, is also proposed. In the experiment, five PSO variants are tested on MIT-BIH arrhythmia database. The linear decreasing inertia variant of PSO, achieves the best results with sensitivity, positive predictivity and detection error rate of 99.75, 99.83 and 0.42%, respectively. Effectiveness of the proposed method is validated by comparing fidelity parameter of proposed method with state-of-the-art methods.

24.Fractal Image Compression based on Polynomial Interpolation

As of today, image compression is still being enhanced, with new mathematical methods in this interesting field aiming to reduce image sizes while maintaining good quality in the corresponding reconstructed images. In this paper, we present a new image compression technique based on polynomial interpolation. It interpolates image pixels and converts them into polynomial factors. In the course of our research, we have also proposed some enhancements to the original approach, such as subimage sorting, in order to reach better compression measures. Preliminary results show a promising improvement in terms of compression ratio and peak signal-to-noise ratio compared to peer techniques.

25.Brain tumor segmentation based on a hybrid clustering technique

Image segmentation refers to the process of partitioning an image into mutually exclusive regions. It can be considered as the most essential and crucial process for facilitating the delineation, characterization, and visualization of regions of interest in any medical image. Despite intensive research, segmentation remains a challenging problem due to the diverse image content, cluttered objects, occlusion, image noise, non-uniform object texture, and other factors.

Many algorithms and techniques are available for image segmentation, but there is still a need for an efficient, fast technique for medical image segmentation. This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm, followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering for image segmentation.

In addition, it benefits from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of our proposed approach in dealing with a large number of segmentation problems by improving segmentation quality and accuracy in minimal execution time.
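
A basic Fuzzy C-means routine, the second clustering stage mentioned above, can be sketched as follows in Python/NumPy (generic FCM on a feature array; the integration with K-means, thresholding and level sets is not shown).

import numpy as np

def fuzzy_c_means(data, c=3, m=2.0, iters=50, seed=0):
    # data: (N, D) feature array, e.g. pixel intensities reshaped to N x 1.
    # Returns the membership matrix U (N x c) and the cluster centers.
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))    # u_ik proportional to d_ik^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers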

26.A perception based color image adaptive watermarking scheme in YCbCr space.

Copyright protection has become a challenging problem in real-life scenarios. Digital watermarking is an important tool for copyright protection. A good watermarking scheme should have high perceptual transparency and should also be robust against possible attacks. The well-known (Lewis-Barni) Human Visual System (HVS) based watermarking model is fairly successful with respect to the first criterion, though its effectiveness on color images has not been established.

Furthermore, although several watermarking schemes are available in the literature for grayscale images, relatively little work has been done on color image watermarking, and what little exists has mostly been tested in the RGB, YUV and YIQ color spaces. Thus the question remains which color space is optimal for color image watermarking and whether this HVS model is applicable in that color space. The present work makes two main contributions in this respect. First, it argues that for color image watermarking, YCbCr can be used as the perceptually optimal color space, with the Cb component as the optimal color channel.

Second, it also tests the effectiveness of the above-mentioned HVS model in that color space. These goals are achieved by using the HVS model to propose a new non-blind (both the original image and the watermark logo are needed for extraction) image-adaptive Discrete Wavelet Transform and Singular Value Decomposition (DWT-SVD) based color image watermarking scheme in the YCbCr color space. The multi-resolution property of the DWT and the stability of the SVD additionally make the scheme robust against attacks, while Arnold scrambling of the watermark enhances the security of our method. The experimental results support the superiority of our scheme over existing methods.

27.Robust Watermarking by SVD of Watermark Embedded in DKT-DCT and DCT Wavelet Column Transform of Host Image

Watermarking in the wavelet domain and with SVD is popular due to its robustness. In this paper, a watermarking technique using the DCT wavelet and the hybrid DKT-DCT wavelet along with SVD is proposed. The wavelet transform is applied to the host, and SVD is applied to the watermark. A few singular values of the watermark are embedded in the mid-frequency band of the host. Scaling of the singular values is done adaptively for each channel (red, green and blue) using the highest transform coefficient from the selected mid-frequency band and the first singular value of the corresponding channel of the watermark.

Singular values of watermark are placed at the index positions of closely matching transform coefficients. This along with the adaptive selection of scaling factor adds to the robustness of watermarking technique. Performance of the proposed technique is evaluated against image processing attacks like cropping, compression using orthogonal transforms, noise addition, histogram equalization and resizing. Performance for DCT wavelet and DKT-DCT wavelet is compared and in many of the attacks DCT wavelet is found to be better than DKT-DCT wavelet.

28.Study and Analysis of Robust DWT-SVD Domain Based Digital Image Watermarking Technique Using MATLAB

This paper presents a robust and blind digital image watermarking technique to achieve copyright protection. In order to protect copyright material from illegal duplication, various technologies have been developed, like key-based cryptographic technique, digital watermarking etc. In digital watermarking, a signature or copyright message is secretly embedded in the image by using an algorithm. In our paper, we implement that algorithm of digital watermarking by combining both DWT and SVD techniques.

Initially, we decompose the original (cover) image into 4 sub-bands using 2-D DWT, and then we apply the SVD on each band by modifying their singular values. After subjecting the watermarked image to various attacks like blurring, adding noise, pixelation, rotation, rescaling, contrast adjustment, gamma correction, histogram equalization, cropping, sharpening, lossy compression etc,

we extract the originally inserted watermark image from all the bands and compare them on the basis of their MSE and PSNR values. Experimental results are provided to illustrate that if we perform modification in all frequencies, then it will make our watermarked image more resistant to a wide range of image processing attacks (including common geometric attacks), i.e. we can recover the watermark from any of the four sub-bands efficiently.
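The core embedding idea, modifying the singular values of a DWT sub-band, can be sketched briefly. The following Python example uses PyWavelets (assumed available) and embeds only in the LL band of a one-level Haar DWT; the paper embeds in all four sub-bands, so treat this as a simplified, non-blind illustration with assumed parameters.

import numpy as np
import pywt   # PyWavelets, assumed available

def embed_dwt_svd(cover, watermark, alpha=0.05):
    # cover: 2-D grayscale image; watermark: 2-D image assumed to be at
    # least as large as the LL sub-band. alpha is the embedding strength.
    ll, (lh, hl, hh) = pywt.dwt2(cover.astype(np.float64), 'haar')
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    _, sw, _ = np.linalg.svd(watermark.astype(np.float64), full_matrices=False)
    s_marked = s + alpha * sw[:len(s)]            # modify the singular values
    ll_marked = (u * s_marked) @ vt               # rebuild the marked LL band
    return pywt.idwt2((ll_marked, (lh, hl, hh)), 'haar')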

29.Towards Practical Self-Embedding for JPEG-Compressed Digital Images

This paper deals with the design of a practical self-recovery mechanism for lossy compressed JPEG images. We extend a recently proposed model of the content reconstruction problem based on digital fountain codes to take into account the impact of emerging watermark extraction and block classification errors. In contrast to existing methods, our scheme guarantees a high and stable level of reconstruction quality. Instead of introducing reconstruction artifacts, emerging watermark extraction errors penalize the achievable tampering rates.

We introduce new mechanisms that allow high-resolution and color images to be handled efficiently. In order to analyze the behavior of our scheme, we derive an improved model for calculating the reconstruction success probability. We introduce a new hybrid mechanism for spreading the reference information over the entire image, which allows a good balance to be found between the achievable tampering rates and the computational complexity. This approach reduced the watermark embedding time from the order of several minutes to the order of single seconds, even on mobile devices.

30.Fusion of MS and PAN Images Preserving Spectral Quality

Image fusion aims at improving spectral information in a fused image as well as adding spatial details to it. Among the existing fusion algorithms, filter-based fusion methods are the most frequently discussed cases in recent publications due to their ability to improve spatial and spectral information of multispectral (MS) and panchromatic (PAN) images. Filter-based approaches extract spatial information from the PAN image and inject it into MS images. Designing an optimal filter that is able to extract relevant and nonredundant information from the PAN image is presented in this letter.

The optimal filter coefficients extracted from statistical properties of the images are more consistent with type and texture of the remotely sensed images compared with other kernels such as wavelets. Visual and statistical assessments show that the proposed algorithm clearly improves the fusion quality in terms of correlation coefficient, relative dimensionless global error in synthesis, spectral angle mapper, universal image quality index, and quality without reference, as compared with fusion methods, including improved intensity–hue–saturation, multiscale Kalman filter, Bayesian, improved nonsubsampled contourlet transform, and sparse fusion of image.

Index Terms—Directional filter, image fusion, optimal filter, pan-sharpening, spectral information.

31.Multifocus Image Fusion Based on NSCT and Focused Area Detection

To overcome the difficulties of sub-band coefficients selection in multiscale transform domain-based image fusion and solve the problem of block effects suffered by spatial domain-based image fusion, this paper presents a novel hybrid multifocus image fusion method. First, the source multifocus images are decomposed using the nonsubsampled contourlet transform (NSCT). The low-frequency sub-band coefficients are fused by the sum-modified-Laplacian-based local visual contrast, whereas the high-frequency sub-band coefficients are fused by the local Log-Gabor energy.

The initial fused image is subsequently reconstructed based on the inverse NSCT with the fused coefficients. Second, after analyzing the similarity between the previous fused image and the source images, the initial focus area detection map is obtained, which is used for achieving the decision map obtained by employing a mathematical morphology postprocessing technique. Finally, based on the decision map, the final fused image is obtained by selecting the pixels in the focus areas and retaining the pixels in the focus region boundary as their corresponding pixels in the initial fused image.

Experimental results demonstrate that the proposed method is better than various existing transform-based fusion methods, including gradient pyramid transform, discrete wavelet transform, NSCT, and a spatial-based method, in terms of both subjective and objective evaluations.

Index Terms—Multi-focus image fusion, non-subsampled contourlet transform, Log-Gabor energy, focused area detection, mathematical morphology

32.Optimizing Image Segmentation by Selective Fusion of Histogram based K-Means Clustering

We present a simple, reduced-complexity and efficient image segmentation and fusion approach. It optimizes the segmentation process of coloured images by fusion of histogram based K-means clusters in various colour spaces. The initial segmentation maps are produced by taking a local histogram of each pixel and allocating it to a bin in the re-quantized colour space. The pixels in the re-quantized colour spaces are clustered into classes using the K-means (Euclidean Distance) technique. The initial segmentation maps from the six colour spaces are then fused together by various techniques and performance metrics are evaluated.

A selective fusion procedure is followed to reduce the computational complexity and achieve a better segmented image. The parameters considered for the selection of initial segmentation maps include entropy, standard deviation and spatial frequency. The performance of the proposed method is analysed by applying it to various images from the Berkeley image database. The results indicate an increased entropy in the segmented image compared to other methods, along with reduced complexity, processing time and hardware resources required for real-time implementation.
Index Terms—Berkeley image database, colour spaces, fusion, histogram, image segmentation and K-Means clustering.

33.Medical Image Fusion by Combining SVD and Shearlet Transform

Image fusion incorporates information from multiple images into a single image to obtain enhanced imaging quality and to reduce randomness and redundancy in medical images for the diagnosis and assessment of medical problems. In this paper, we present a new technique for medical image fusion that applies the Singular Value Decomposition (SVD) in the Shearlet Transform (ST) domain to improve the information content of an image by fusing images such as positron emission tomography (PET) and magnetic resonance imaging (MRI) images. The proposed method first transforms the source image into a shearlet image using the Shearlet Transform. We then apply the SVD model in the lowpass sub-band and select modified sub-bands according to their local characteristics. The different high-pass sub-band coefficients are processed by ST decomposition, and the high and low sub-bands are fused. Finally, the fused image is reconstructed by performing the inverse shearlet transform (IST). We use three benchmark images to carry out our experiments and compare with many state-of-the-art techniques. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in both subjective and objective evaluation criteria.

34.Comparison of Pixel-Level and Feature Level Image Fusion Methods

In recent times, multiple imaging sensors are employed in several applications such as surveillance, medical imaging and machine vision. In these multi-sensor systems there is a need for image fusion techniques that effectively combine the information from disparate imaging sensors into a single composite image enabling a good understanding of the scene. The prevailing fusion algorithms employ either the mean or the choose-max fusion rule for selecting the best coefficients for fusion at each pixel location. The choose-max rule distorts constant background information, whereas the mean rule blurs edges.

Hence, in this paper, the fusion rule is replaced by a soft computing technique that makes intelligent decisions to improve the accuracy of the fusion process in both pixel- and feature-based image fusion. The Non-Subsampled Contourlet Transform (NSCT) is employed for multi-resolution decomposition, as it has been demonstrated to capture the intrinsic geometric structures in images effectively. Experiments demonstrate that the proposed pixel- and feature-level image fusion methods provide better visual quality, with clearer edge information and better objective quality indexes, than individual multiresolution-based methods such as the discrete wavelet transform and NSCT.

35.A New Secure Image Transmission Technique via Secret-Fragment-Visible Mosaic Images by Nearly Reversible Color Transformations

A new secure image transmission technique is proposed, which automatically transforms a given large-volume secret image into a so-called secret-fragment-visible mosaic image of the same size. The mosaic image, which looks similar to an arbitrarily selected target image and may be used as camouflage for the secret image, is produced by dividing the secret image into fragments and transforming their color characteristics to those of the corresponding blocks of the target image.

Skillful techniques are designed to conduct the color transformation process so that the secret image may be recovered nearly losslessly. A scheme of handling the overflows/underflows in the converted pixels’ color values by recording the color differences in the untransformed color space is also proposed. The information required for recovering the secret image is embedded into the created mosaic image by a lossless data hiding scheme using a key. Good experimental results show the feasibility of the proposed method.
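
The block-wise color transformation at the heart of this scheme amounts to matching first- and second-order color statistics. A minimal Python/NumPy sketch is shown below; the overflow/underflow handling and the key-based data hiding described above are not included.

import numpy as np

def match_color_statistics(src_block, tgt_block):
    # Shift and scale each channel of src_block so its mean and standard
    # deviation match those of tgt_block
    src = src_block.astype(np.float64)
    tgt = tgt_block.astype(np.float64)
    out = np.empty_like(src)
    for ch in range(src.shape[2]):
        mu_s, sd_s = src[..., ch].mean(), src[..., ch].std() + 1e-12
        mu_t, sd_t = tgt[..., ch].mean(), tgt[..., ch].std()
        out[..., ch] = (src[..., ch] - mu_s) * (sd_t / sd_s) + mu_t
    return np.clip(out, 0, 255).astype(np.uint8)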

36.SUBSENSE: A Universal Change Detection Method With Local Adaptive Sensitivity

Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes.

This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels.

This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which uses no low-level or architecture-specific instructions, reaches real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.

37.Flower Classification Using Neural Network Based Image Processing

40.Retinal Disease Screening through Local Binary Patterns

This work investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of Local Binary Patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering (LBPF) and local phase quantization (LPQ).

The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD) and normal fundus images by analysing the texture of the retina background and avoiding a prior lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure, obtaining promising results.

For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all the cases, and of almost 1 and 0.99, respectively, for AMD detection, were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening.

41.Application of content based Image Retrieval in Diagnosis Brain Disease

Content Based Image Retrieval (CBIR) systems retrieve from a database the brain images that are similar to a query image. CBIR is an application of computer vision and has been one of the most active research areas in the field over the last 10 years. Instead of text-based searching, CBIR efficiently retrieves images that are visually similar to the query image; in CBIR, the query is given in the form of an image. This paper aims to provide efficient medical image retrieval for the diagnosis of brain disease.

42.Robust Combination Method for Privacy Protection Using Fingerprint and Face Biometrics

A secure, advanced system for fingerprint privacy protection is proposed that combines two different biometrics, fingerprint and face, into a new identity. During enrollment, one fingerprint image and one face image are captured from the same person. Then the minutiae positions and orientations are extracted from the fingerprint, along with reference points from both biometrics. LDN extracts directional information from the face: the face image is divided into parts, and LDN features are computed from each face part.

These features are then concatenated into a feature vector and used as a face descriptor. Based on this extracted information and the proposed coding strategies, a combined template is generated and stored in a database. During verification, the system requires two queries: one fingerprint and one face from the same person. A two-step fingerprint matching algorithm matches the query fingerprint against the generated combined minutiae template. For the face, a chi-square dissimilarity measure is used to match the person's feature vector against all feature vectors of the persons present in the dataset.

A fingerprint-face reconstruction approach is used to create a combined fingerprint-face image from the combined template. Hence, the virtual identity is simply the image reconstructed from the two biometrics, one fingerprint and one face, and it is used for matching. The FRR and FAR of the proposed system are low, at 1% each. The proposed work can create a better identity even when the fingerprint and face images are taken at random.

43.Pointwise Shape-Adaptive DCT for High-Quality Denoising and Deblocking of Grayscale and Color Images

The shape-adaptive DCT (SA-DCT) transform can be computed on a support of arbitrary shape, yet retains a computational complexity comparable to that of the usual separable block-DCT (B-DCT). Despite its near-optimal decorrelation and energy compaction properties, application of the SA-DCT has been rather limited, targeted nearly exclusively at video compression. In this paper we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the Anisotropic Local Polynomial Approximation (LPA) - Intersection of Confidence Intervals (ICI) technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support.

Since supports corresponding to different points are in general overlapping, the local estimates are averaged together using adaptive weights that depend on the region statistics. This approach can be used for various image processing tasks. In this paper we consider in particular image denoising as well as image deblocking and deringing from block-DCT compression. A special structural constraint in luminance-chrominance space is also proposed to enable accurate filtering of color images. Simulation experiments show a state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean.

44.Predicting trait impressions of faces using local face recognition techniques

The aim of this work is to propose a method for detecting the social meanings that people perceive in facial morphology using local face recognition techniques. Developing a reliable method to model people’s trait impressions of faces has theoretical value in psychology and human–computer interaction. The first step in creating our system was to develop a solid ground truth.

For this purpose, we collected a set of faces that exhibit strong human consensus within the bipolar extremes of the following six trait categories: intelligence, maturity, warmth, sociality, dominance, and trustworthiness. In the studies reported in this paper, we compare the performance of global face recognition techniques with local methods applying different classification systems. We find that the best performance is obtained using local techniques, where support vector machines or Levenberg-Marquardt neural networks are used as stand-alone classifiers.

System performance in each trait dimension is compared using the area under the ROC curve. Our results show that not only are our proposed learning methods capable of predicting the social impressions elicited by facial morphology but they are also in some cases able to outperform individual human performances.

45.Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution

This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images via gamma correction and the probability distribution of luminance pixels. To enhance video, the proposed image enhancement method uses temporal information regarding the differences between frames to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods.

Index Terms—Contrast enhancement, gamma correction, histogram equalization, histogram modification
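
The general recipe, a per-level gamma derived from a weighted histogram, can be sketched in Python/NumPy as follows. The weighting exponent and other details are assumptions for illustration, not the authors' exact settings.

import numpy as np

def adaptive_gamma_correction(gray, alpha=0.5):
    # gray: uint8 image. Build a weighted PDF, take its CDF, and use
    # 1 - CDF as a per-grey-level gamma applied through a look-up table.
    pdf = np.bincount(gray.ravel(), minlength=256) / gray.size
    p_min, p_max = pdf.min(), pdf.max()
    w = p_max * ((pdf - p_min) / (p_max - p_min + 1e-12)) ** alpha   # weighted pdf
    cdf_w = np.cumsum(w) / (w.sum() + 1e-12)
    gamma = 1.0 - cdf_w
    levels = np.arange(256) / 255.0
    lut = np.round(255.0 * levels ** gamma).astype(np.uint8)
    return lut[gray]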

46.Pansharpening Using Regression of Classified MS and Pan Images to Reduce Color Distortion

The synthesis of low-resolution panchromatic (Pan) image is a critical step of ratio enhancement (RE) and component substitution (CS) pansharpening methods. The two types of methods assume a linear relation between Pan and multispectral (MS) images. However, due to the nonlinear spectral response of satellite sensors, the qualified low-resolution Pan image cannot be well approximated by a weighted summation of MS bands.

Therefore, in some local areas, significant gray value difference exists between a synthetic Pan image and a high-resolution Pan image. To tackle this problem, the pixels of Pan and MS images are divided into several classes by k-means algorithm, and then multiple regression is used to calculate summation weights on each group of pixels. Experimental results demonstrate that the proposed technique can provide significant improvements on reducing color distortions.
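
The regression step described above reduces to a least-squares fit of the low-resolution Pan image from the MS bands. The Python/NumPy sketch below shows a single global regression with hypothetical input names; the paper applies it per k-means class.

import numpy as np

def regress_pan_weights(ms_bands, pan_lowres):
    # ms_bands: list of B images of equal size; pan_lowres: image of the
    # same size. Returns weights (plus a bias) such that the weighted sum
    # of MS bands approximates the low-resolution Pan image.
    X = np.stack([b.ravel().astype(np.float64) for b in ms_bands], axis=1)  # N x B
    X = np.hstack([X, np.ones((X.shape[0], 1))])                            # bias term
    y = pan_lowres.ravel().astype(np.float64)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def synthesize_pan(ms_bands, w):
    X = np.stack([b.ravel().astype(np.float64) for b in ms_bands], axis=1)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ w).reshape(ms_bands[0].shape)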

47.Fingerprint Compression Based on Sparse Representation

A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new given fingerprint image, its patches are represented according to the dictionary by computing an l0-minimization, and the representation is then quantized and encoded.

In this paper, we consider the effect of various factors on compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
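
The sparse-coding step is commonly approximated with a greedy pursuit. Below is a small orthogonal matching pursuit sketch in Python/NumPy standing in for the l0-minimization; the dictionary learning, quantization and entropy coding stages are not shown, and it is not the authors' algorithm.

import numpy as np

def omp(dictionary, patch, n_nonzero=8):
    # dictionary: d x K matrix with (ideally unit-norm) columns;
    # patch: length-d vector to approximate with at most n_nonzero atoms.
    p = patch.astype(np.float64)
    residual = p.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(dictionary.T @ residual)))   # best-matching atom
        if idx not in support:
            support.append(idx)
        sub = dictionary[:, support]
        coeffs, *_ = np.linalg.lstsq(sub, p, rcond=None)        # refit on the support
        residual = p - sub @ coeffs
    sparse = np.zeros(dictionary.shape[1])
    sparse[support] = coeffs
    return sparse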

48.Unified Blind Method for Multi-Image Super-Resolution and Single/Multi-Image Blur Deconvolution

This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs.

The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges, while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration.

For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.

49.Discrete Wavelet Transform and Gradient Difference based approach for text localization in videos

Text detection and localization are important for video analysis and understanding. Scene text in video contains semantic information and can thus contribute significantly to video retrieval and understanding. However, most approaches detect scene text in still images or in a single video frame, while videos differ from images in their temporal redundancy. This paper proposes a novel hybrid method to robustly localize text in natural scene images and videos based on the fusion of the discrete wavelet transform and the gradient difference.

A set of rules and geometric properties have been devised to localize the actual text regions. Then, a morphological operation is performed to generate the text regions, and finally connected component analysis is employed to localize the text in a video frame. The experimental results obtained on the publicly available standard ICDAR 2003 and Hua datasets illustrate that the proposed method can accurately detect and localize texts of various sizes, fonts and colors. Experiments on a large collection of video databases reveal the suitability of the proposed method to video data.
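
A minimal MATLAB sketch of this kind of wavelet-plus-gradient fusion (not the authors' exact pipeline) is given below. The file name, structuring-element sizes and geometric rules are illustrative, and the Wavelet and Image Processing Toolboxes are assumed.

I = im2double(rgb2gray(imread('frame.png')));      % placeholder video frame
[~, cH, cV, cD] = dwt2(I, 'haar');                 % high-frequency wavelet subbands
detail = imresize(abs(cH) + abs(cV) + abs(cD), size(I));
[gx, gy] = gradient(I);
gmag = hypot(gx, gy);
gradDiff = imdilate(gmag, ones(3)) - imerode(gmag, ones(3));   % max minus min gradient in a window
score = detail .* gradDiff;                        % fused text-likeliness map
bw = imbinarize(mat2gray(score));                  % Otsu threshold on the fused map
bw = imclose(bw, strel('rectangle', [3 15]));      % morphological merge of characters into words
stats = regionprops(bw, 'BoundingBox', 'Area');    % connected component analysis
keep = arrayfun(@(s) s.Area > 100 && s.BoundingBox(3) > s.BoundingBox(4), stats);
textBoxes = reshape([stats(keep).BoundingBox], 4, []).';   % one candidate text box per row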

50.LBP-Based Edge-Texture Features for Object Recognition

This paper proposes two sets of novel edge-texture features, Discriminative Robust Local Binary Pattern (DRLBP) and Ternary Pattern (DRLTP), for object recognition. By investigating the limitations of Local Binary Pattern (LBP), Local Ternary Pattern (LTP) and Robust LBP (RLBP), DRLBP and DRLTP are proposed as new features. They solve the problem of discrimination between a bright object against a dark background and vice-versa inherent in LBP and LTP.

DRLBP also resolves the problem of RLBP whereby LBP codes and their complements in the same block are mapped to the same code. Furthermore, the proposed features retain contrast information necessary for proper representation of object contours that LBP, LTP, and RLBP discard. Our proposed features are tested on seven challenging data sets: INRIA Human, Caltech Pedestrian, UIUC Car, Caltech 101, Caltech 256, Brodatz, and KTH-TIPS2-a. Results demonstrate that the proposed features outperform the compared approaches on most data sets.
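
For reference, the plain LBP descriptor that DRLBP and DRLTP build on can be sketched in a few lines of MATLAB; only this baseline is shown (the discriminative and robust variants are not reproduced), and the file name is a placeholder.

I = im2double(rgb2gray(imread('object.png')));     % placeholder input image
[rows, cols] = size(I);
codes = zeros(rows-2, cols-2);
offsets = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];   % 8 neighbours, clockwise
for k = 1:8
    nb = I(2+offsets(k,1):rows-1+offsets(k,1), 2+offsets(k,2):cols-1+offsets(k,2));
    codes = codes + (nb >= I(2:rows-1, 2:cols-1)) * 2^(k-1); % threshold against the centre pixel
end
lbpHist = histcounts(codes(:), 0:256);             % 256-bin LBP histogram
lbpHist = lbpHist / sum(lbpHist);                  % normalised feature vector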

51.A Pan sharpening Method Based on the Sparse Representation of Injected Details

The application of sparse representation (SR) theory to the fusion of multispectral (MS) and panchromatic images is giving a large impulse to this topic, which is recast as a signal reconstruction problem from a reduced number of measurements. This letter presents an effective implementation of this technique, in which the application of SR is limited to the estimation of missing details that are injected in the available MS image to enhance its spatial features.

We propose an algorithm exploiting the details self-similarity through the scales and compare it with classical and recent pan sharpening methods, both at reduced and full resolution. Two different data sets, acquired by the WorldView-2 and IKONOS sensors, are employed for validation, achieving remarkable results in terms of spectral and spatial quality of the fused product.

52.Image Denoising using Orthonormal Wavelet Transform with Stein Unbiased Risk Estimator

De-noising plays a vital role in image pre-processing. It is often a necessary step before the image data is analysed. It attempts to remove whatever noise is present and retain the significant information, regardless of the frequency content of the signal; this differs from simple low-pass filtering, which discards high-frequency content and retains only the low-frequency content. De-noising has to be performed to recover the useful information. In this process, much attention is paid to how well the edges are preserved and how much of the noise granularity is removed. In this paper, different thresholding techniques are simulated and compared in terms of their PSNR. The simulations show that the Stein unbiased risk estimator is one of the best techniques for removing noise from an image in terms of PSNR.
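
A hedged sketch of one such pipeline is shown below: a one-level orthonormal DWT, soft thresholding of the detail subbands with a SURE-selected threshold (thselect with the 'rigrsure' rule from the Wavelet Toolbox), and PSNR as the quality measure. The test image, wavelet and noise level are illustrative choices.

I = im2double(imread('cameraman.tif'));            % clean reference image
J = imnoise(I, 'gaussian', 0, 0.01);               % noisy observation
[cA, cH, cV, cD] = dwt2(J, 'db4');                 % one-level orthonormal DWT
sigma = median(abs(cD(:))) / 0.6745;               % robust noise estimate
den = @(c) sign(c) .* max(abs(c) - sigma * thselect(c(:)'/sigma, 'rigrsure'), 0);
Iden = idwt2(cA, den(cH), den(cV), den(cD), 'db4');% soft-threshold the detail subbands only
fprintf('PSNR: %.2f dB\n', psnr(Iden, I));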

53.Research on the rice counting method based on connected component labelling

Rice counting is essential to the modern agricultural production sector, and counting accuracy directly affects the assessment of rice quality. In order to solve the problems of time consumption, labour intensity and low precision in traditional manual counting and outline counting, this paper uses image processing technology to count rice grains. Because overlapping rice grains make segmentation difficult, this paper uses a dynamic threshold method to binarize the image and then extracts and labels the connected domains. Finally, the method obtains the number of rice grains by processing the area of each connected domain.
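
An illustrative MATLAB sketch of the counting idea is given below: adaptive (dynamic) thresholding, connected component labelling, and an area-based count so that overlapping grains contribute more than one. The sensitivity and area values are assumptions; 'rice.png' ships with MATLAB.

I = imread('rice.png');
bw = imbinarize(I, 'adaptive', 'Sensitivity', 0.5);   % dynamic threshold
bw = imopen(bw, strel('disk', 2));                    % remove small noise specks
bw = bwareaopen(bw, 30);                              % drop tiny fragments
cc = bwconncomp(bw);                                  % label the connected domains
areas = cellfun(@numel, cc.PixelIdxList);             % area of each connected domain
typical = median(areas);                              % typical single-grain area
count = sum(max(round(areas / typical), 1));          % overlapping blobs count as several grains
fprintf('Estimated number of rice grains: %d\n', count);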

54.Combined DWT-DCT Digital Watermarking Technique Software Used for CTS of Bank.

For faster clearing of cheques, a Cheque Truncation System (CTS) is used. The bank's CTS sends electronic cheque images to the drawee branch for payment through the clearing house. It is normally believed that the system is safe and secure. However, intruders may damage the data, degrade the quality of the cheque image or duplicate the cheque image, so there is a need for security and copyright protection. In this paper, a "Combined DWT-DCT Watermarking Technique Software Used for CTS of Bank" is discussed. The combined DWT-DCT digital watermarking software embeds an imperceptible watermark that supports copyright protection and security of cheque images.
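
A minimal sketch of the combined DWT-DCT embedding idea (not the paper's exact software) is shown below. The cover image is assumed to be grayscale, the watermark bits are generated randomly for illustration, and alpha is an assumed embedding strength; the Wavelet and Image Processing Toolboxes are required.

cheque = im2double(imread('cheque.png'));          % placeholder grayscale cheque image
wm = rand(64) > 0.5;                               % illustrative 64x64 binary watermark
alpha = 0.05;                                      % embedding strength (assumed)
[cA, cH, cV, cD] = dwt2(cheque, 'haar');           % first-level DWT of the cover
D = dct2(cH);                                      % DCT of one detail subband
D(1:64, 1:64) = D(1:64, 1:64) + alpha * double(wm);% additive embedding in a coefficient block
cHw = idct2(D);
watermarked = idwt2(cA, cHw, cV, cD, 'haar');      % inverse DWT gives the watermarked cheque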

55.A Novel Secure Image Steganography Method Based On Chaos Theory In Spatial Domain

This paper presents a novel approach to building a secure data hiding technique in digital images. The image steganography technique takes advantage of the limited sensitivity of the human visual system (HVS). It uses an image as the cover medium for embedding a secret message. The most important requirement for a steganographic algorithm is to be imperceptible while maximizing the size of the payload. In this paper, a method is proposed to encrypt the secret bits of the message based on chaos theory before embedding them into the cover image. A 3-3-2 LSB insertion method has been used for the image steganography. Experimental results show a substantial improvement in the Peak Signal to Noise Ratio (PSNR) and Image Fidelity (IF) values of the proposed technique over the base technique of 3-3-2 LSB insertion.
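
A compact sketch of the two stages (not the paper's exact scheme) is shown below: a logistic map generates a chaotic keystream that encrypts the message bits, and the result is embedded 3-3-2 (three bits in R, three in G, two in B) per pixel. The file names, the key values x = 0.345 and r = 3.99, and the choice to embed along the first image row are all illustrative assumptions.

cover = imread('cover.png');                       % placeholder RGB cover image
msg = uint8('secret');                             % example payload
bits = [];
for i = 1:numel(msg)
    bits = [bits, bitget(msg(i), 8:-1:1)];         % bytes -> bit stream, MSB first
end
x = 0.345; r = 3.99;                               % logistic-map secret key (assumed)
key = zeros(1, numel(bits));
for k = 1:numel(bits)
    x = r * x * (1 - x);                           % chaotic iteration
    key(k) = x > 0.5;                              % keystream bit
end
enc = double(xor(bits > 0, key > 0));              % chaos-encrypted message bits
enc = [enc, zeros(1, mod(-numel(enc), 8))];        % pad to a whole number of pixels
packBits = @(b) uint8(sum(double(b) .* 2.^(numel(b)-1:-1:0)));  % bits -> small integer
stego = cover;
for p = 1:numel(enc)/8                             % embed along the first image row
    g = enc((p-1)*8 + (1:8));
    stego(1, p, 1) = bitor(bitand(cover(1, p, 1), uint8(248)), packBits(g(1:3))); % R: 3 LSBs
    stego(1, p, 2) = bitor(bitand(cover(1, p, 2), uint8(248)), packBits(g(4:6))); % G: 3 LSBs
    stego(1, p, 3) = bitor(bitand(cover(1, p, 3), uint8(252)), packBits(g(7:8))); % B: 2 LSBs
end
imwrite(stego, 'stego.png');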

56.Biometric Authentication using Near Infrared Images of Palm Dorsal Vein Patterns

This paper proposes an improved palm dorsal (back of hand) feature extraction algorithm for biometric personal authentication applications. The proposed method employs an existing database of near infrared (IR) images of the palm dorsal hand vein surface. The proposed system includes: 1) infrared palm dorsal image database collection; 2) detection of the region of interest (ROI); 3) palm vein extraction by median filtering; 4) feature extraction using the crossing number algorithm; and 5) authentication using minutiae triangulation matching. The input image is segmented using an optimum thresholding algorithm. The knuckle points are used as key points for image normalization and extraction of the region of interest. The extracted ROI is processed to obtain a reliable vein pattern, and features (minutiae) are extracted using the crossing number algorithm. The scores for authentication are generated based on minutiae triangulation matching.

57.Improved LSB based Steganography Techniques for Color Images in Spatial Domain

This research paper proposes a new improved approach to information security in RGB color images using a hybrid feature detection technique: a two-component-based Least Significant Bit (LSB) substitution technique and an adaptive LSB substitution technique for data hiding. The Advanced Encryption Standard (AES) is used to provide two-tier security, random pixel embedding imparts resistance to attacks, and hybrid filtering makes the scheme immune to disturbances such as noise. An image is a combination of edge and smooth areas, which gives ample opportunity to hide information in it.

The proposed work is a direct implementation of the principle that edge areas, being high in contrast, color, density and frequency, can tolerate more changes in their pixel values than smooth areas, and so can be embedded with a larger amount of secret data while retaining the original characteristics of the image. The proposed approach achieves improved imperceptibility and capacity compared with various existing techniques, along with better resistance to steganalysis attacks such as histogram analysis, chi-square and RS analysis, as proven experimentally.

58.A Real Time Approach for Secure Text Transmission Using Video Cryptography

Images and video are the two most basic forms of transmitting information. With the help of image and video encryption methods, any particular set of images or videos can be transmitted without worrying about security. In the proposed paper, a very simple real-time algorithm using pixel mapping is applied for the encryption of the images which are the basic building blocks of any video file. The video is split into individual frames using MATLAB code and all the frames are stored sequentially. Each such frame contains a combination of red, blue and green layers.

If we consider a pixel as an 8-bit value, then each pixel has a value in the range 0 to 255. In the proposed work, for each frame, two pixels situated at the top-left and bottom-right corners are modified so as to insert text in each image. After the pixel values have been changed, all the images are placed in sequence and the frames are cascaded to generate the encrypted version of the original video file. This new video is almost identical to the original video file, with no changes visible to the naked eye.

59.A Secure Image Steganography Based on RSA Algorithm and Hash-LSB Technique

Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from past times to the present. It has always been an interesting topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore, from time to time researchers have developed many techniques for secure transfer of data, and steganography is one of them.

In this paper we propose a new image steganography technique, Hash-LSB with the RSA algorithm, to provide more security to the data as well as to our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits in the LSBs of the RGB pixel values of the cover image. This technique ensures that the message has been encrypted before it is hidden in the cover image. Even if the cipher text is revealed from the cover image, an intermediate person other than the receiver cannot access the message, as it is in encrypted form.

60.A Novel Approach On Image Steganographic Methods For Optimum Hiding Capacity.

Steganography gained importance in the past few years due to the increasing need for providing secrecy in an open environment like the internet. Steganography is the art of hiding the fact that communication is taking place, by hiding information in other information. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the internet. Steganography is used to conceal information so that no one can sense its existence. In most algorithms used to secure information, both steganography and cryptography are used together.

Steganography has many technical challenges, such as high hiding capacity and imperceptibility. In this paper, we try to optimize these two main requirements by proposing a novel technique for hiding data in digital images that combines an adaptive hiding capacity function, which hides secret data in the integer wavelet coefficients of the cover image, with the optimum pixel adjustment (OPA) algorithm. The coefficients are selected according to a pseudorandom function generator to increase the security of the hidden data. The OPA algorithm is applied after embedding the secret message to minimize the embedding error. The proposed system shows high hiding rates with reasonable imperceptibility compared to other steganographic systems.

61.Biometric authentication using near infrared images of palm dorsal vein patterns

This paper proposes an improved palm dorsal (back of hand) feature extraction algorithm for biometric personal authentication applications. The proposed method employs an existing database of near infrared (IR) images of the palm dorsal hand vein surface. The proposed system includes: 1) infrared palm dorsal image database collection; 2) detection of the region of interest (ROI); 3) palm vein extraction by median filtering; 4) feature extraction using the crossing number algorithm; and 5) authentication using minutiae triangulation matching.

The input image is segmented using an optimum thresholding algorithm. The knuckle points are used as key points for image normalization and extraction of the region of interest. The extracted ROI is processed to obtain a reliable vein pattern, and features (minutiae) are extracted using the crossing number algorithm. The scores for authentication are generated based on minutiae triangulation matching.

62.A Proposed Method In Image Steganography To Improve Image Quality With Lsb Technique

Image steganography is becoming an important area in the field of steganography. As the demand for security and privacy increases, so does the need to hide secret information. If a user wants to send secret information to another person with security and privacy, it can be sent using image steganography. During the last few years, many different methods of hiding information have been developed in this field.

Some of the existing methods for hiding information give good results only when the information is hidden successfully. LSB is the most popular steganography technique. It hides the secret message in the RGB image based on its binary coding. However, LSB embedding changes the image quality quite noticeably and is easy to attack.

It is clear that LSB embedding changes the image when the least significant bits are modified in the binary image format, so the image quality degrades and there is a noticeable difference between the original image and the encoded image. To overcome this problem, this work suggests modifying the LSB technique so that the image quality remains the same as before encoding. The basic idea for obtaining good image quality is to modify the hiding procedure of the least significant bits: two bits are hidden at a time by taking identical values.

63.Reversible Data Hiding in Encrypted Images by Reserving Room Before Encryption

Recently, more and more attention has been paid to reversible data hiding (RDH) in encrypted images, since it maintains the excellent property that the original cover can be losslessly recovered after the embedded data is extracted, while protecting the confidentiality of the image content. All previous methods embed data by reversibly vacating room from the encrypted images, which may be subject to errors in data extraction and/or image restoration. In this paper, we propose a novel method that reserves room before encryption with a traditional RDH algorithm, making it easy for the data hider to reversibly embed data in the encrypted image. The proposed method can achieve real reversibility, that is, data extraction and image recovery are free of any error. Experiments show that this novel method can embed payloads more than 10 times as large as previous methods for the same image quality (PSNR).

64.Satellite Image Fusion using Fast Discrete Curvelet Transforms

Fusion based on Fourier and wavelet transform methods retains rich multispectral details but fewer spatial details from the source images. Wavelets perform well only at linear features and not at nonlinear discontinuities because they do not use the geometric properties of structures. Curvelet transforms overcome such difficulties in feature representation. In this paper, we define a novel fusion rule via high-pass modulation using the Local Magnitude Ratio (LMR) in the Fast Discrete Curvelet Transform (FDCT) domain.

For the experimental study of this method, an Indian Remote Sensing (IRS) Resourcesat-1 LISS IV satellite sensor image with a spatial resolution of 5.8 m is used as the low-resolution (LR) multispectral image, and Cartosat-1 Panchromatic (Pan) imagery with a spatial resolution of 2.5 m is used as the high-resolution (HR) Pan image. The fusion rule generates an HR multispectral image at 2.5 m spatial resolution. The method is quantitatively compared with the Wavelet, Principal Component Analysis (PCA), High Pass Filtering (HPF), Modified Intensity-Hue-Saturation (M.IHS) and Gram-Schmidt fusion methods. The proposed method spatially outperforms the other methods and retains rich multispectral details.

65.A Robust Scheme for Digital Video Watermarking based on Scrambling of Watermark

The swift growth of communication networks has led to a situation that facilitates online e-commerce of digital assets. Consequently, digital data owners can rapidly transfer large volumes of multimedia content across the Internet. This has led to broad interest in multimedia security and multimedia copyright protection. This paper proposes a robust scheme for digital video watermarking based on scrambling the watermark and then embedding it into different parts of the source video according to its scene changes.

The proposed algorithm is robust against various attacks such as frame dropping, averaging and collusion. The work started with a comprehensive investigation of modern watermarking technologies, which showed that none of the existing schemes is capable of resisting all attacks. Hence, we propose the notion of embedding different parts of a single watermark into different scenes of a video. The effectiveness of the scheme is tested over a series of experiments in which a number of typical image processing attacks are applied, and the robustness of the scheme is demonstrated using the latest Stirmark benchmark.

66.Medical Image Fusion Based on Joint Sparse Method

In this paper, a novel joint image fusion algorithm, a hybrid of Orthogonal Matching Pursuit (OMP) and Principal Component Analysis (PCA), is proposed to properly utilize the advantages and overcome the disadvantages of both the OMP and PCA methods. Firstly, common and innovative images are extracted from the source images. Secondly, the sparse PCA method is employed to fuse the information of the innovative features. Then weighted average fusion is used to fuse the sparse PCA result with the common feature, thereby preserving the edge information and high spatial resolution. We demonstrate this methodology on medical images from different sources, and the experimental results prove the robustness of the proposed method.
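
The OMP-based sparse stage is beyond a short snippet, but the PCA weighting step can be sketched in a few lines of MATLAB for two co-registered grayscale sources. This is classical PCA-weighted fusion rather than the authors' joint sparse method, and the file names are placeholders.

A = im2double(imread('mri.png'));                  % source image 1 (grayscale)
B = im2double(imread('pet.png'));                  % source image 2, same size, registered
C = cov([A(:) B(:)]);                              % 2x2 covariance of paired pixel values
[V, D] = eig(C);
[~, i] = max(diag(D));                             % principal eigenvector
w = abs(V(:, i)) / sum(abs(V(:, i)));              % non-negative fusion weights
F = w(1) * A + w(2) * B;                           % PCA-weighted fused image
imwrite(F, 'fused.png');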

67.Image processing techniques for the enhancement of brain tumor patterns

Brain tumor analysis is done by doctors, but its grading can lead to different conclusions from one doctor to another. So, for the ease of doctors, research was carried out using software with edge detection and segmentation methods, which give the edge pattern and segments of the brain and of the brain tumor itself. Medical image segmentation has been a vital area of research, as it involves complex problems relevant to the proper diagnosis of brain disorders.

This research provides a foundation in segmentation and edge detection as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described, along with the difficulties encountered in each modality.

68.Survey on Multi-Focus Image Fusion Algorithms

Image fusion is a technique of combining source images, e.g. multi-modal or multi-focus images, to obtain a new, more informative image. A multi-focus image fusion algorithm combines images having different parts in focus. Applications of image fusion include remote sensing, digital cameras, etc. This paper describes various multi-focus image fusion algorithms which use different focus measures such as spatial frequency, energy of the image Laplacian, and morphological opening and closing. The performance of these algorithms is analyzed based on how focused regions in the images are determined to obtain the fused image.
The general approach to multi-focus image fusion is to identify the focused regions and combine them to obtain an enhanced image.

69.Automatic retina exudates segmentation without a manually labeled training set

Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy which can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, two new methods for the detection of exudates are presented which do not use a supervised learning step; therefore, they do not require labelled lesion training sets which are time consuming to create, difficult to obtain and prone to human error.

We introduce a new dataset of fundus images from various ethnic groups and levels of DME which we have made publicly available. We evaluate our algorithm on this dataset and compare our results with two recent exudate segmentation algorithms. In all of our tests, our algorithms perform better than or comparably to the compared methods, with an order of magnitude reduction in computational time.

70.Local Edge-Preserving Multiscale Decomposition for High Dynamic Range Image Tone Mapping

Local energy pattern, a statistical histogram-based representation, is proposed for texture classification. First, we use normalized local-oriented energies to generate local feature vectors, which describe the local structures distinctively and are less sensitive to imaging conditions. Then, each local feature vector is quantized by self-adaptive quantization thresholds determined in the learning stage using histogram specification, and the quantized local feature vector is transformed to a number by N-nary coding, which helps to preserve more structure information during vector quantization.

Finally, the frequency histogram is used as the representation feature. The performance is benchmarked by material categorization on the KTH-TIPS and KTH-TIPS2-a databases. Our method is compared with typical statistical approaches, such as basic image features, local binary pattern (LBP), local ternary pattern, completed LBP, Weber local descriptor, and VZ algorithms (VZ-MR8 and VZ-Joint).

The results show that our method is superior to the other methods on the KTH-TIPS2-a database and achieves competitive performance on the KTH-TIPS database. Furthermore, we extend the representation from static images to dynamic texture and achieve favorable recognition results on the University of California at Los Angeles (UCLA) dynamic texture database.

71.A Pan-Sharpening Based on the Non-Subsampled Contourlet Transform: Application to Worldview-2 Imagery

Two pan-sharpening methods based on the nonsubsampled contourlet transform (NSCT) are proposed. NSCT is very efficient in representing directional information and capturing the intrinsic geometrical structures of objects. It has the characteristics of high resolution, shift-invariance, and high directionality. In the proposed methods, a given number of decomposition levels is used for the multispectral (MS) images, while a higher number of decomposition levels is used for the Pan images, relative to the ratio of the Pan pixel size to the MS pixel size.

This preserves both spectral and spatial qualities while decreasing computation time. Moreover, upsampling of MS images is performed after NSCT and not before. By applying upsampling after NSCT, structures and detail information of the MS images are more likely to be preserved and thus stay more distinguishable. Hence, we propose to exploit this property in pan-sharpening by fusing it with detail information provided by the Pan image at the same fine level. The proposed methods are tested on WorldView-2 datasets and compared with the standard pan-sharpening technique. Visual and quantitative results demonstrate the efficiency of the proposed methods. Both spectral and spatial qualities have been improved.

72.PET and MRI Brain Image Fusion Using Wavelet Transform with Structural Information Adjustment and Spectral Information Patching

In this paper, we present a PET and MR brain image fusion method based on the wavelet transform for low- and high-activity brain image regions, respectively. Our method generates very good fusion results by adjusting the anatomical structural information in the gray matter (GM) area and then patching the spectral information in the white matter (WM) area after wavelet decomposition and gray-level fusion. We used normal axial, normal coronal, and Alzheimer's disease brain images as the three datasets for testing and comparison. Experimental results showed that the performance of our fusion method is better than that of the IHS+RIM fusion method in terms of spectral discrepancy (SD) and average gradient (AG). In fact, our method is superior to the IHS+RIM method both visually and quantitatively.

73.Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image Segmentation

In this paper, we present an improved fuzzy C-means (FCM) algorithm for image segmentation by introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends on the space distance of all neighboring pixels and their gray-level difference simultaneously. By using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels.

In order to further enhance its robustness to noise and outliers, we introduce a kernel distance measure into its objective function. The new algorithm adaptively determines the kernel parameter by using a fast bandwidth selection rule based on the distance variance of all data points in the collection. Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and is relatively independent of the type of noise.
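
As a baseline only (the trade-off weighted fuzzy factor and the kernel metric are not reproduced), standard fuzzy C-means segmentation of an intensity image can be sketched with the fcm routine from the Fuzzy Logic Toolbox. The file name and number of clusters are assumptions.

I = im2double(imread('brain.png'));                % placeholder grayscale image
nClusters = 3;                                     % e.g. background / tissue / lesion
[centers, U] = fcm(I(:), nClusters);               % fuzzy C-means on pixel intensities
[~, labels] = max(U, [], 1);                       % hard label = cluster of maximum membership
seg = reshape(labels, size(I));                    % label image
imagesc(seg); axis image off; colormap(parula(nClusters));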

74.A New Iterative Triclass Thresholding Technique in Image Segmentation

We present a new method in image segmentation that is based on Otsu’s method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu’s threshold and computes the mean values of the two classes as separated by the threshold. Based on the Otsu’s threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu’s method does.

The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at next iteration. At the succeeding iteration, Otsu’s method is applied on the TBD region to calculate a new threshold and two class means and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions.

Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method achieves better performance than the standard Otsu method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
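
A hedged MATLAB sketch of this iterative triclass idea could look as follows; the stopping tolerance, the test image and the way the final leftover TBD pixels are assigned are illustrative choices rather than the paper's exact settings.

I = im2double(imread('coins.png'));                % 'coins.png' ships with MATLAB
fg = false(size(I)); bg = false(size(I));
tbd = true(size(I));                               % initially every pixel is undetermined
prevT = -inf;
while true
    t = graythresh(I(tbd));                        % Otsu threshold on the current TBD region
    if abs(t - prevT) < 1e-3, break; end           % thresholds have converged
    mu1 = mean(I(tbd & I > t));                    % mean of the upper (foreground-like) class
    mu0 = mean(I(tbd & I <= t));                   % mean of the lower (background-like) class
    fg  = fg | (tbd & I >= mu1);                   % confidently foreground
    bg  = bg | (tbd & I <= mu0);                   % confidently background
    tbd = tbd & I > mu0 & I < mu1;                 % new, smaller TBD region
    prevT = t;
    if ~any(tbd(:)), break; end
end
seg = fg | (tbd & I > t);                          % leftover TBD split by the last threshold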

75.Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function

This paper presents two data hiding methods for grayscale images based on the modulus function, one adaptive and one non-adaptive. Our adaptive scheme is based on the concept of human vision sensitivity, whereby pixels in edge areas can tolerate much larger changes than pixels in smooth areas without producing visible distortion. In our adaptive scheme, the average difference value of the four neighbourhood pixels in a block, together with a secret threshold key, determines whether the current block is located in an edge or smooth area. Pixels in the edge areas are embedded with Q bits of secret data, with a larger value of Q than for pixels located in smooth areas.

We also present a non-adaptive data hiding algorithm. Our non-adaptive scheme, via an error reduction procedure, produces high visual quality for the stego-image. The proposed schemes have several advantages: 1) the embedding capacity and the visual quality of the stego-image are scalable; in other words, the embedding rate as well as the image quality can be scaled for practical applications; 2) high embedding capacity can be achieved with minimal visual distortion; 3) our methods require little memory space for the secret data embedding and extracting phases; and 4) secret keys are used to protect the embedded secret data.

Thus, the level of security is high; and 5) the problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity and level of security, and that our non-adaptive method is better than other non-adaptive methods in terms of stego-image quality. Results also show that our adaptive algorithm can resist the RS steganalysis attack.

76.Nonedge-Specific Adaptive Scheme for Highly Robust Blind Motion Deblurring of Natural Images

Blind motion deblurring estimates a sharp image from a motion blurred image without the knowledge of the blur kernel. Although significant progress has been made on tackling this problem, existing methods, when applied to highly diverse natural images, are still far from stable. This paper focuses on the robustness of blind motion deblurring methods toward image diversity—a critical problem that has been previously neglected for years. We classify the existing methods into two schemes and analyze their robustness using an image set consisting of 1.2 million natural images.

The first scheme is edge-specific, as it relies on the detection and prediction of large-scale step edges. This scheme is sensitive to the diversity of the image edges in natural images. The second scheme is nonedge-specific and explores various image statistics, such as the prior distributions. This scheme is sensitive to statistical variation over different images. Based on the analysis, we address the robustness by proposing a novel nonedge-specific adaptive scheme (NEAS), which features a new prior that is adaptive to the variety of textures in natural images. By comparing the performance of NEAS against the existing methods on a very large image set, we demonstrate its advance beyond the state-of-the-art.

77.Optimization of Segmentation Algorithms Through Mean-Shift Filtering Preprocessing

This letter proposes an improved mean-shift filtering method. The method is added as a preprocessing step for regional segmentation methods, which aims at benefiting segmentations in a more general way. Using this method, first, a logistic regression model between two edge cues and semantic object boundaries is established. Then, boundary posterior probabilities are predicted by the model and associated with weights in the mean-shift filtering iteration.

Finally, the filtered image, instead of the original image, is put into segmentation methods. In experiments, the regression model is trained with an aerial image, which is tested with an aerial image and a QuickBird image. Two popular segmentation methods are adopted for evaluations. Both quantitative and qualitative evaluations reveal that the presented procedure facilitates a superior image segmentation result and higher classification accuracy.

78.An Efficient Modified Structure Of CDF 9/7 Wavelet Based On Adaptive Lifting With SPIHT For Lossy To Lossless Image Compression

We present a modified structure of the 2-D CDF 9/7 wavelet transform based on adaptive lifting for image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, adaptive lifting performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The predicting and updating signals of adaptive lifting can be derived even at the fractional pixel precision level to achieve high resolution, while still maintaining perfect reconstruction.

To enhance performance, the adaptive-lifting-based modified structure of the 2-D CDF 9/7 transform is coupled with the SPIHT coding algorithm to overcome the drawbacks of the wavelet transform. Experimental results show that the proposed image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 6.0 dB over the existing structure on images with rich orientation features.

79.Missing Texture Reconstruction Method Based on Error Reduction Algorithm Using Fourier Transform Magnitude Estimation Scheme

A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes is presented in this brief. In our method, Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm.

Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
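
The core ER iteration itself is compact. The sketch below assumes the Fourier magnitude of the target patch has already been estimated (the patch-selection stage is not shown); the file name, the location of the missing block and the iteration count are illustrative.

patch = im2double(imread('patch.png'));            % placeholder target patch
known = true(size(patch)); known(20:40, 20:40) = false;   % illustrative missing block
mag = abs(fft2(patch));                            % stand-in for the estimated magnitude
g = patch; g(~known) = mean(patch(known));         % initial guess inside the hole
for it = 1:200
    G = fft2(g);
    G = mag .* exp(1i * angle(G));                 % impose the estimated Fourier magnitude
    g = real(ifft2(G));                            % back to the image domain
    g(known) = patch(known);                       % re-impose the known intensities
    g = min(max(g, 0), 1);                         % keep intensities in [0, 1]
end
reconstructed = g;                                 % patch with the missing area filled in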

80.Security Attacks on the Wavelet Transform and Singular Value Decomposition Image Watermarking

Two vulnerable attacks on wavelet transform (WT) and Singular Value Decomposition (SVD) based image watermarking schemes are presented in this paper. The WT-SVD-based watermarking is robust against various common image manipulations and geometrical attacks; however, it cannot resist two security attacks, i.e. an attacker attack, which successfully claims the real owner's watermarked image, and an owner attack, which correctly extracts a watermark from any arbitrary image. As proved in this study, the SVD watermarking scheme cannot provide trustworthy evidence for rightful ownership protection. In addition, the robustness of the SVD watermarking scheme is a result of improper algorithm design.
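
For context, the basic SVD embedding step used by such WT-SVD schemes can be sketched as follows (the attacks themselves are procedural and not reproduced). Alpha and the file names are assumptions, and the comment marks the side information that the ownership attacks exploit.

host = im2double(imread('host.png'));              % placeholder grayscale host image
wmk  = im2double(imread('mark.png'));              % placeholder grayscale watermark
alpha = 0.1;                                       % embedding strength (assumed)
[cA, cH, cV, cD] = dwt2(host, 'haar');             % embed in the approximation subband
[U, S, V] = svd(cA);
[Uw, Sw, Vw] = svd(S + alpha * imresize(wmk, size(S)));   % singular values carry the mark
cAw = U * Sw * V';                                 % Uw and Vw must be kept as side information,
                                                   % which is exactly what the ownership attacks exploit
marked = idwt2(cAw, cH, cV, cD, 'haar');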

81.Occlusion Handling via Random Subspace Classifiers for Human Detection

This paper describes a general method to address partial occlusions for human detection in still images. The random subspace method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach’s capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non occluded data.

In contrast to many recent approaches, we propose a method which does not require manual labeling of body parts, defining any semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called Poble Sec, in which a considerable number of targets are partially occluded.

The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.

82.Colorization-Based Compression Using Optimization

In this paper, we formulate the colorization-based coding problem as an optimization problem, i.e., an L1 minimization problem. In colorization-based coding, the encoder chooses a few representative pixels (RP), for which the chrominance values and positions are sent to the decoder, whereas in the decoder, the chrominance values for all the pixels are reconstructed by colorization methods. The main issue in colorization-based coding is how to extract the RP well, so that the compression rate and the quality of the reconstructed color image are both good.

By formulating colorization-based coding as an L1 minimization problem, it is guaranteed that, given the colorization matrix, the chosen set of RP is the optimal set in the sense that it minimizes the error between the original and the reconstructed color image. In other words, for a fixed error value and a given colorization matrix, the chosen set of RP is the smallest set possible. We also propose a method to construct the colorization matrix that colorizes the image in a multiscale manner. This, combined with the proposed RP extraction method, allows us to choose a very small set of RP, as is shown experimentally.

83.Texture Enhanced Histogram Equalization Using TV-L1 Image Decomposition

Histogram transformation defines a class of image processing operations that are widely applied in the implementation of data normalization algorithms. In this paper, we present a new variational approach for image enhancement that is constructed to alleviate the intensity saturation effects introduced by standard contrast enhancement (CE) methods based on histogram equalization. We initially apply total variation (TV) minimization with an L1 fidelity term to decompose the input image into cartoon and texture components.

Contrary to previous papers that rely solely on the information encompassed in the distribution of the intensity information, in this paper, the texture information is also employed to emphasize the contribution of the local textural features in the CE process. This is achieved by implementing a nonlinear histogram warping CE strategy that is able to maximize the information content in the transformed image. Our experimental study addresses the CE of a wide variety of image data and comparative evaluations are provided to illustrate that our method produces better results than conventional CE strategies.

84.Fusion of Multifocus Images to Maximize Image Information

When an image of a 3-D scene is captured, only scene parts at the focus plane appear sharp. Scene parts in front of or behind the focus plane appear blurred. In order to create an image where all scene parts appear sharp, it is necessary to capture images of the scene at different focus levels and fuse the images. In this paper, first registration of multifocus images is discussed and then an algorithm to fuse the registered images is described.

The algorithm divides the image domain into uniform blocks and for each block identifies the image with the highest contrast. The images selected in this manner are then locally blended to create an image that has overall maximum contrast. Examples demonstrating registration and fusion of multifocus images are given and discussed.
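
A hedged sketch of this selection-and-blending idea, using local variance over a sliding window as the contrast measure instead of strict uniform blocks, is given below. The file names, window size and smoothing amount are illustrative, and the sources are assumed to be registered already.

A = im2double(imread('focus_near.png'));           % registered source focused on the foreground
B = im2double(imread('focus_far.png'));            % registered source focused on the background
blk = 16;                                          % local window size (assumed)
varA = stdfilt(A, true(blk+1)) .^ 2;               % local contrast (variance) of source A
varB = stdfilt(B, true(blk+1)) .^ 2;               % local contrast (variance) of source B
mask = double(varA >= varB);                       % 1 where A is locally sharper
mask = imgaussfilt(mask, blk/4);                   % soft blending across region borders
F = mask .* A + (1 - mask) .* B;                   % fused all-in-focus image
imwrite(F, 'fused_multifocus.png');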

85.Inception of Hybrid Wavelet Transform using Two Orthogonal Transforms and Its Use for Image Compression

The paper presents the novel hybrid wavelet transform generation technique using two orthogonal transforms. The orthogonal transforms are used for analysis of global properties of the data into frequency domain. For studying the local properties of the signal, the concept of wavelet transform is introduced, where the mother wavelet function gives the global properties of the signal and wavelet basis functions which are compressed versions of mother wavelet are used to study the local properties of the signal.

Wavelets derived from some orthogonal transforms extract the global characteristics of the data better, while those of other orthogonal transforms may capture the local characteristics in a better way. The idea of the hybrid wavelet transform arises from combining the traits of two different orthogonal transform wavelets to exploit the strengths of both.

86.A New DCT-based Multiresolution Method for Simultaneous Denoising and Fusion of SAR Images

Individual multiresolution techniques for separate image fusion and denoising have been widely researched. We propose a novel multiresolution Discrete Cosine Transform based method for simultaneous image denoising and fusion, demonstrating its efficacy with respect to the Discrete Wavelet Transform and the Dual-tree Complex Wavelet Transform.

We incorporate the Laplacian pyramid transform multiresolution analysis and a sliding window Discrete Cosine Transform for simultaneous denoising and fusion of the multiresolution coefficients. The impact of image denoising on the results of fusion is demonstrated and advantages of simultaneous denoising and fusion for SAR images are also presented.

87.Brain Segmentation using Fuzzy C means clustering to detect tumour Region

Tumor segmentation from MRI data is an important but time-consuming manual task performed by medical experts. Research addressing diseases of the brain in the field of computer vision is one of the current challenges in medicine, and engineers and researchers have recently taken up the challenge of bringing technological innovation to medical imaging.

This paper focuses on a new algorithm for brain segmentation of MRI images using the fuzzy C-means algorithm to accurately diagnose the region of cancer. The first step is noise filtering, after which the FCM algorithm is applied to segment only the tumor area. In this research, multiple MRI images of the brain can be used to detect glioma (tumor) growth using an advanced diameter technique.

88.Efficient image compression technique using full, column and row transforms on colour image

This paper presents an image compression technique based on the column transform, row transform and full transform of an image. Different transforms such as the DFT, DCT, Walsh, Haar, DST, Kekre's transform and Slant transform are applied to colour images of size 256x256x8 by separating the R, G, and B colour planes. These transforms are applied in three different ways, namely column, row and full transform. From each transformed image, a specific number of low-energy coefficients is eliminated and the compressed images are reconstructed by applying the inverse transform.

The Root Mean Square Error (RMSE) between the original image and the compressed image is calculated in each case. From the implementation of the proposed technique it has been observed that the RMSE values and visual quality of images obtained by the column transform are close to those given by the full transform of the images. The row transform gives quite high RMSE values compared to the column and full transforms at higher compression ratios. The aim of the proposed technique is to achieve compression with acceptable image quality and fewer computations by using the column transform.
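
For one transform (the DCT) and one colour plane, the column-versus-full comparison can be sketched as below; the test image, the 10% coefficient budget and the use of RMSE follow the spirit of the technique, but the exact settings are assumptions.

I = im2double(imresize(imread('peppers.png'), [256 256]));   % 256x256 colour test image
R = I(:, :, 1);                                    % red colour plane
keepN = round(0.10 * numel(R));                    % keep the 10% highest-energy coefficients
compress = @(C) C .* (abs(C) >= min(maxk(abs(C(:)), keepN)));
Cfull = dct2(R);                                   % full (2-D) transform
Rfull = idct2(compress(Cfull));
T = dctmtx(size(R, 1));                            % orthogonal DCT matrix
Ccol = T * R;                                      % column transform only
Rcol = T' * compress(Ccol);                        % inverse column transform
rmse = @(X) sqrt(mean((X(:) - R(:)).^2));
fprintf('RMSE full: %.4f   RMSE column: %.4f\n', rmse(Rfull), rmse(Rcol));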

89.Grading of rice grains by image processing

The purpose of this paper is the grading of rice grains by image processing techniques. Commercially, the grading of rice is done according to the size of the grain kernel (full, half or broken). Food grain types and their quality are rapidly assessed through visual inspection by human inspectors, but the decision-making capabilities of human inspectors are subject to external influences such as fatigue, vengeance, bias, etc. With the help of image processing we can overcome these limitations and also identify any broken grains mixed in. Here we discuss the various procedures used to obtain the percentage quality of rice grains.

90.Multi layer information hiding -a blend of steganography and visual cryptography

This study combines the notions of both steganography [1] and visual cryptography [2]. Recently, a number of innovative algorithms have been proposed in the fields of steganography and visual cryptography with the goals of improving security, reliability, and efficiency, because there will always be new kinds of threats in the field of information hiding. In fact, steganography and visual cryptography are two sides of the same coin.

Visual cryptography has the problem of revealing the existence of the hidden data, whereas steganography hides the existence of the hidden data. This study suggests multiple layers of encryption by hiding the hidden data: the information is first encrypted using visual cryptography and the resulting share(s) [3] are then hidden in images or audio files using steganography. The proposed system has fewer drawbacks and can resist attacks.

91.Quality Evaluation of Rice Grains Using Morphological Methods

In this paper we present an automatic evaluation method for determining the quality of milled rice. Among the milled rice samples, the quantity of broken kernels is determined with the help of shape descriptors and geometric features. Grains whose lengths are less than 75% of the full grain size are considered broken kernels. The proposed method gives good results in the evaluation of rice quality.
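
A minimal sketch of this grading rule using regionprops is given below. The 75% rule follows the paper, while the adaptive threshold, the clean-up steps and the way the reference full-grain length is estimated are assumptions; 'rice.png' ships with MATLAB.

I = imread('rice.png');
bw = imbinarize(I, 'adaptive');                    % separate kernels from the background
bw = bwareaopen(imopen(bw, strel('disk', 1)), 30); % clean up small artefacts
props = regionprops(bw, 'MajorAxisLength');        % kernel length via a fitted ellipse
len = [props.MajorAxisLength];
fullLen = median(maxk(len, ceil(0.1 * numel(len)))); % proxy for the whole-grain length
broken = len < 0.75 * fullLen;                     % kernels shorter than 75% are broken
fprintf('Total: %d   Broken: %d   Head rice: %.1f %%\n', ...
        numel(len), sum(broken), 100 * (1 - mean(broken)));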

Looking for Image Processing projects source code?

Connect with our experts
