Show simple item record

dc.contributor.advisor	Olivares Poggi, Cesar Augusto
dc.contributor.author	Huiza Pereyra, Eric Raphael
dc.description.abstract	People with deafness or hearing disabilities who aim to use computer-based systems rely on state-of-the-art video classification and human action recognition techniques that combine traditional movement pattern recognition with deep learning. In this work we present a pipeline for semi-automatic video annotation applied to a non-annotated Peruvian Signs Language (PSL) corpus, along with a novel method for progressive detection of PSL elements (nSDm). We produced a set of video annotations indicating sign appearances for a small set of nouns and numbers, along with a labeled PSL dataset. A model obtained by ensembling a 2D CNN trained on movement patterns extracted from the PSL dataset using Lucas-Kanade optical flow, and an RNN with LSTM cells trained on raw RGB frames extracted from the PSL dataset, reports state-of-the-art results on sign classification tasks over the PSL dataset in terms of AUC, precision and recall.	es_ES
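The abstract's movement-pattern extraction step can be illustrated with a minimal single-point Lucas-Kanade solver. This is a sketch in plain NumPy, not the thesis's actual pipeline (which feeds dense optical-flow patterns into a 2D CNN); the function name and window size are illustrative assumptions.

```python
import numpy as np

def lucas_kanade_point(frame1, frame2, x, y, win=9):
    """Estimate the optical-flow vector (dx, dy) at pixel (x, y).

    Solves the Lucas-Kanade least-squares system A @ [dx, dy] = -It
    over a win x win window, where A stacks the spatial gradients.
    Illustrative sketch only, not the thesis implementation.
    """
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    half = win // 2
    Ix = np.gradient(f1, axis=1)   # spatial gradient along x (columns)
    Iy = np.gradient(f1, axis=0)   # spatial gradient along y (rows)
    It = f2 - f1                   # temporal gradient between frames
    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    # One row per window pixel: [Ix, Iy] @ [dx, dy] = -It
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v[0], v[1]
```

On a smooth synthetic frame shifted one pixel to the right, the recovered vector is close to (1, 0), which is the kind of per-window motion signature a downstream CNN could consume.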
dc.description.uri	Trabajo de investigación	es_ES
dc.publisher	Pontificia Universidad Católica del Perú	es_ES
dc.rights	Atribución 2.5 Perú	*
dc.subject	Redes neuronales (Computación)	es_ES
dc.subject	Algoritmos computacionales	es_ES
dc.subject	Reconocimiento óptico de patrones	es_ES
dc.title	Talking with signs: a simple method to detect nouns and numbers in a non annotated signs language corpus	es_ES
dc.type	info:eu-repo/semantics/masterThesis	es_ES
thesis.degree.name	Magíster en Informática	es_ES
thesis.degree.level	Maestría	es_ES
thesis.degree.grantor	Pontificia Universidad Católica del Perú. Escuela de Posgrado	es_ES
thesis.degree.discipline	Informática	es_ES
