Amjad Altadmri – PhD

Earlier today, Amjad Altadmri passed his PhD viva, subject to minor amendments.

Thesis Title: “Semantic Video Annotation in Domain-Independent Videos Utilising Similarity and Commonsense Knowledgebases”

Thanks to the external examiner, Dr John Wood from the University of Essex, the internal examiner, Dr Bashir Al-Diri, and the viva chair, Dr Kun Guo.

Congratulations and well done.

All colleagues are invited to join Amjad in celebrating his achievement tomorrow (Thursday 28th Feb) at 12:00 noon in our meeting room, MC3108, with drinks and light refreshments available.

Best wishes.

 

February PGR Research Presentations

The PGR Research Presentations series started on Wednesday 13th Feb at 1pm in the Meeting Room, MC3108 (3rd floor).

Each session features two PGR presentations. This session we had the following:

 

Title: “A Probabilistic Approach to Correctly and Automatically Form the Retinal Vasculature”

By: Touseef Qureshi

Abstract:

Correct configuration and formation of the retinal vasculature is a vital step towards the diagnosis of cardiovascular diseases. A single minor mistake while connecting broken vessel segments can lead to a completely incorrect vasculature. Image processing techniques alone cannot solve this problem. We are therefore working on a multidimensional approach that integrates artificial intelligence, image processing techniques, statistics and probability, and we expect it to yield an optimal method for correctly configuring broken vessel segments at junctions, bridges and terminals.

 

Title: “Semantic Video Analysis: from Camera Language to Human Language”

By: Amjad Altadmri

Abstract:

The rapidly increasing volume of visual data, available online or via broadcasting, emphasizes the need for intelligent tools for indexing, searching, rating and retrieval. Textual semantic representations, such as tagging, labeling and annotation, are often important parts of the video indexing process, due to the advances in text analysis and their intuitive, user-friendly way of representing semantics suitable for search and retrieval.

Ideally, this annotation should simulate the human cognitive way of perceiving and describing videos. While digital video media contain only low-level visual data, human beings can infer far more meaningful information from videos. The difference between this low-level content and the corresponding human perception is referred to as the “semantic gap”. This gap is even harder to bridge in domain-independent, uncontrolled videos, due both to the lack of any prior information about the analyzed video and to the vast amount of generic knowledge that must be available.