Procesamiento de Señales e Imágenes Digitales (Digital Signal and Image Processing).
Permanent URI for this collection: http://98.81.228.127/handle/20.500.12404/5040
5 results
Item Optimal vicinity 2D median filter for fixed-point or floating-point values (Pontificia Universidad Católica del Perú, 2024-06-19) Chang Fu, Javier; Carranza De La Cruz, Cesar Alberto

Median filters are a nonlinear digital technique commonly used to remove white and salt-and-pepper noise from digital images. The filter replaces each pixel value with the median of the surrounding values. Floating-point implementations find the median through comparison-based sorting. A trivial method for sorting n elements has complexity O(n²), and the fastest sorts compute the median of n elements in O(n log n); however, these algorithms usually exhibit strong execution divergence. Other implementations use histogram-based algorithms and perform best with large filter windows; such algorithms can evaluate the median filter in constant time, i.e., with O(1) complexity. This work proposes a fast, highly parallelizable median filter algorithm. It is based on divergence-free O(n log² n) sorting and O(n) merging, which allow groups of pixels to be computed in parallel. The method exploits the redundancy among neighboring pixel values and finds the optimal processing vicinity that minimizes the average number of operations per pixel.
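The divergence-free idea can be illustrated with the classic fixed sorting network for the 3×3 case: every pixel undergoes exactly the same sequence of min/max exchanges, so there are no data-dependent branches. The NumPy sketch below uses the well-known 19-exchange median-of-9 network; it is not the thesis's algorithm (which sorts larger optimal vicinities and merges partial results), and the function names are ours.

```python
import numpy as np

def _sort_pair(p, a, b):
    # Branch-free compare-exchange: afterwards p[a] <= p[b] elementwise.
    lo = np.minimum(p[a], p[b])
    hi = np.maximum(p[a], p[b])
    p[a], p[b] = lo, hi

def median3x3(img):
    """3x3 median filter via the classic 19-exchange sorting network.
    Every pixel runs the exact same min/max sequence: no data-dependent
    branches, hence no execution divergence on SIMD/GPU hardware."""
    img = np.asarray(img, dtype=np.float64)
    H, W = img.shape
    padded = np.pad(img, 1, mode="edge")
    # The 9 neighbours of every pixel, as 9 shifted copies of the image.
    p = [padded[i:i + H, j:j + W].copy() for i in range(3) for j in range(3)]
    network = [(1, 2), (4, 5), (7, 8), (0, 1), (3, 4), (6, 7),
               (1, 2), (4, 5), (7, 8), (0, 3), (5, 8), (4, 7),
               (3, 6), (1, 4), (2, 5), (4, 7), (4, 2), (6, 4), (4, 2)]
    for a, b in network:
        _sort_pair(p, a, b)
    return p[4]  # after the network, p[4] holds the median
```

Because the exchange sequence is fixed, the same code path processes every pixel, which is what makes this style of filter map well onto GPUs.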
This work (i) can process fixed-point or floating-point images interchangeably, (ii) fully exploits the parallelism of multiple architectures, (iii) has been implemented on both CPU and GPU, and (iv) achieves a speedup over the state of the art.

Item Novel Edge-Preserving Filtering Model Based on the Quadratic Envelope of the l0 Gradient Regularization (Pontificia Universidad Católica del Perú, 2023-01-26) Vásquez Ortiz, Eduar Aníbal; Rodríguez Valderrama, Paul Antonio

In image processing, the l0 gradient regularization (l0-grad) is an inverse problem which penalizes the l0 norm of the reconstructed image's gradient. Current state-of-the-art algorithms for solving this problem are based on the alternating direction method of multipliers (ADMM). l0-grad, however, reconstructs images poorly when the noise level is large, producing images with flat regions and abrupt changes between them that look heavily distorted. This happens because it prioritizes keeping the main edges but risks losing important details when the images are too noisy. Furthermore, since ‖∇u‖₀ is a non-continuous and non-convex regularizer, l0-grad cannot be solved directly by methods such as the accelerated proximal gradient (APG). This thesis presents a novel edge-preserving filtering model (Ql0-grad) that uses a relaxed form of the quadratic envelope of the l0 norm of the gradient. This enables us to control the level of detail that can be lost during denoising and deblurring. The Ql0-grad model can be seen as a mixture of the Total Variation and l0-grad models. The results for the denoising and deblurring problems show that our model sharpens major edges while strongly attenuating textures. Compared to the l0-grad model, it reconstructed images with flat, texture-free regions and smooth changes between them, even when the input image was corrupted with a large amount of noise.
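A toy way to see why the two regularizers behave differently: the elementwise proximal operator of an l0 penalty hard-thresholds gradient components (keep or kill), while the l1/TV penalty soft-thresholds them (shrink everything), and a quadratic-envelope model sits between the two extremes. The helper names below are ours, and this is an elementwise sketch, not the thesis's Ql0-grad solver.

```python
import numpy as np

def l0_grad_count(u):
    """||grad u||_0: the number of nonzero finite-difference components
    of u, i.e. the quantity the l0-grad model penalizes."""
    gx = np.diff(u, axis=0)
    gy = np.diff(u, axis=1)
    return int(np.count_nonzero(gx) + np.count_nonzero(gy))

def hard_threshold(g, lam):
    # Elementwise prox of lam*||g||_0: keep a component only when
    # zeroing it would cost more than lam (i.e. g^2 > 2*lam).
    return np.where(g ** 2 > 2.0 * lam, g, 0.0)

def soft_threshold(g, lam):
    # Elementwise prox of lam*||g||_1, the convex TV counterpart:
    # every component is shrunk toward zero by lam.
    return np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
```

Hard thresholding preserves large jumps exactly but annihilates small ones, which is why l0-grad keeps main edges yet loses fine detail under heavy noise; soft thresholding attenuates everything, including edges.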
Furthermore, the average differences between the metrics obtained with Ql0-grad and l0-grad were +0.96 dB SNR (signal-to-noise ratio), +0.96 dB PSNR (peak signal-to-noise ratio), and +0.03 SSIM (structural similarity index measure). An early version of the model was presented in the paper Fast gradient-based algorithm for a quadratic envelope relaxation of the l0 gradient regularization, published in the international and indexed conference proceedings of the XXIII Symposium on Image, Signal Processing and Artificial Vision.

Item 3D updating of solid models based on local geometrical meshes applied to the reconstruction of ancient monumental structures (Pontificia Universidad Católica del Perú, 2014-10-14) Zvietcovich Zegarra, José Fernando; Castañeda Aphan, Benjamín; Perucchio, Renato

We introduce a novel methodology for locally updating an existing 3D solid model of a complex monumental structure with the geometric information provided by a 3D mesh (point cloud) extracted from the digital survey of a specific sector of a monument. Solid models are fundamental for the engineering analysis and conservation of monumental structures of the cultural heritage. Finite element analysis (FEA), the most versatile and commonly used tool for the numerical simulation of the static and dynamic response of large structures, requires 3D solids which accurately represent the outside as well as the inside geometry and topology of the domain to be analyzed. However, the structural changes introduced during the lifetime of the monument and the damage caused by anthropogenic and natural factors produce complex geometrical configurations that may not be generated with the desired accuracy in standard CAD solid modeling software.
On the other hand, the development of digital techniques for surveying historical buildings and cultural monuments, such as laser scanning and photogrammetric reconstruction, has made possible the creation of accurate 3D mesh models describing the geometry of those structures for multiple applications in heritage documentation, preservation, and archaeological interpretations. The proposed methodology consists of a series of procedures which utilize image processing, computer vision, and computational geometry algorithms operating on entities defined in the Solid Modeling space and the Mesh space. The operand solid model is defined as the existing solid model to be updated. The 3D mesh model containing new surface information is first aligned to the operand solid model via 3D registration and, subsequently, segmented and converted to a provisional solid model incorporating the features to be added or subtracted. Finally, provisional and operand models are combined and data is transferred through regularized Boolean operations performed in a standard CAD environment. We test the procedure on the Main Platform of the Huaca de la Luna, Trujillo, Peru, one of the most important massive earthen structures of the Moche civilization. Solid models are defined in AutoCAD while 3D meshes are recorded with a Faro Focus laser scanner. The results indicate that the proposed methodology is effective at transferring complex geometrical and topological features from the mesh to the solid modeling space. 
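The 3D registration step can be sketched, under the simplifying assumption of known point correspondences, with the closed-form least-squares rigid alignment (the Kabsch algorithm). A real scan-to-model pipeline would typically wrap this inside ICP, re-estimating correspondences at each iteration; the function name below is ours.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    i.e. minimizing sum ||R p_i + t - q_i||^2 (Kabsch algorithm).
    Assumes known one-to-one correspondences between the N x 3 arrays."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

With scan data, one would alternate this closed-form step with nearest-neighbour matching between the laser-scanned point cloud and the solid model's surface until the alignment converges.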
The methodology preserves, as much as possible, the initial accuracy of the meshes in the geometry of the resulting solid model, which would be highly difficult and time-consuming to achieve with manual approaches.

Item Multi-scale image inpainting with label selection based on local statistics (Pontificia Universidad Católica del Perú, 2014-09-09) Paredes Zevallos, Daniel Leoncio; Rodríguez Valderrama, Paúl Antonio

We propose a novel inpainting method that uses a multi-scale approach to speed up the well-known Markov Random Field (MRF) based inpainting method. MRF based inpainting methods are slow compared with other exemplar-based methods because their computational complexity is O(|L|²), where L is the set of labels of feasible solutions. Our multi-scale approach seeks to reduce the number of feasible labels L through an appropriate selection of labels using information from the previous (lower-resolution) scale. For the initial label selection we use local statistics; moreover, to compensate for the loss of information at low resolution levels, we use features related to the original image gradient. Our computational results show that our approach is competitive, in terms of reconstruction quality, with the original MRF based inpainting as well as other exemplar-based inpainting algorithms, while being at least one order of magnitude faster than the original MRF based inpainting and competitive with exemplar-based inpainting.

Item Automatic regularization parameter selection for the total variation mixed noise image restoration framework (Pontificia Universidad Católica del Perú, 2013-03-27) Rojas Gómez, Renán Alfredo; Rodríguez Valderrama, Paúl Antonio

Image restoration consists in recovering a high-quality image estimate based only on observations. This is an ill-posed inverse problem, which implies non-unique, unstable solutions. Regularization methods allow the introduction of constraints in such problems and assure a stable and unique solution.
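The stabilizing effect of a regularizer is easiest to see with Tikhonov regularization, the simplest instance (the framework below uses Total Variation, which penalizes the image gradient instead): adding a λ‖x‖² term makes the normal equations well conditioned, so a nearly singular problem acquires a unique, stable solution. This is an illustrative sketch, not part of the thesis's method.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: argmin_x ||Ax - b||^2 + lam*||x||^2.
    For lam > 0 the matrix A^T A + lam*I is positive definite, so the
    solution is unique and stable even when A is nearly singular.
    (Total Variation replaces lam*||x||^2 with a gradient penalty.)"""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Without the λ term, a tiny perturbation of b can change the solution arbitrarily; with it, the perturbation's effect is bounded, which is exactly the "stable and unique solution" guarantee the abstract refers to.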
One of these methods is Total Variation, which has been broadly applied in signal processing tasks such as image denoising, image deconvolution, and image inpainting under multiple noise scenarios. Total Variation features a regularization parameter which defines the impact of the regularization on the solution, a crucial factor in the quality of the result; therefore, an optimal selection of the regularization parameter is required. Furthermore, while classic Total Variation applies its constraint to the entire image, there are multiple scenarios in which this approach is not the most adequate, and defining different regularization levels for different image elements benefits such cases. In this work, an optimal regularization parameter selection framework for Total Variation image restoration is proposed. It covers two noise scenarios: impulse noise and impulse over additive Gaussian noise. A broad study of the state of the art is included, covering noise estimation algorithms, risk estimation methods, and numerical solutions for Total Variation. To approach the optimal parameter estimation problem, several adaptations are proposed to create a spatially local regularization that requires no a priori information about the noise level. Quality and performance results, which include the work covered in two recently published articles, show the effectiveness of the proposed regularization parameter selection and a great improvement over the global regularization framework, attaining a high-quality reconstruction comparable with state-of-the-art algorithms.
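A didactic solver for a smoothed version of the TV denoising model makes the role of the regularization parameter λ concrete: small λ leaves noise in, large λ flattens detail, which is why selecting λ well (automatically, in this framework) matters. The sketch below is plain gradient descent with periodic boundaries and assumed step sizes; the thesis relies on far more efficient numerical schemes.

```python
import numpy as np

def tv_denoise(f, lam, n_iter=400, eps=0.1, step=0.05):
    """Gradient descent on a smoothed TV (ROF-type) model:
       min_u 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2),
    with periodic boundaries. eps smooths the non-differentiable TV
    term; the fixed step size assumes lam and eps of this order."""
    u = f.copy()
    for _ in range(n_iter):
        gx = np.roll(u, -1, axis=0) - u        # forward difference (periodic)
        gy = np.roll(u, -1, axis=1) - u
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag
        # Backward divergence, the negative adjoint of the forward difference.
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u = u - step * ((u - f) - lam * div)
    return u
```

Sweeping lam over a range and scoring each reconstruction is the brute-force version of the parameter selection problem; the framework above instead estimates the optimal λ (even locally, per image region) without a priori knowledge of the noise level.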