4th International Conference on Computer Science, Engineering and Applications (CSEA-2018)

April 28~29, 2018, Dubai, UAE

Accepted Papers

    Kamel Benachenhou1, Mhamed Hamadouche2, 1Aeronautics and Space Study Institute, Blida, Algeria, 2LIMOSE Laboratory, Boumerdes University, Boumerdes, Algeria.

    This paper analyzes the acquisition process performed by a Global Navigation Satellite System (GNSS) receiver. Signal acquisition decides the presence or absence of a GNSS signal by comparing the signal under test with a threshold, and provides a code-delay and Doppler-frequency estimate. In low-signal or noisy conditions, however, acquisition systems are vulnerable and can yield a high false-alarm probability and a low detection probability. We introduce a cell-averaging constant false alarm rate (CA-CFAR) detector to deal with these situations. In this context, we use a new mathematical derivation to develop closed-form analytic expressions for the false-alarm probability. The performance of the proposed detector is evaluated and numerical results from Monte Carlo simulations are presented. Finally, the proposed scheme is implemented on an FPGA.
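The cell-averaging CFAR idea can be sketched numerically. The following is a minimal illustration, not the paper's FPGA implementation; the window sizes and the classical square-law scaling factor are assumptions:

```python
import numpy as np

def ca_cfar_threshold(power, n_train=8, n_guard=2, pfa=1e-3):
    """Cell-averaging CFAR: for each cell, average the training cells on
    both sides (excluding guard cells) and scale the estimate by a factor
    derived from the desired false-alarm probability."""
    n = len(power)
    N = 2 * n_train
    # Classical CA-CFAR scaling for a square-law detector with N cells
    alpha = N * (pfa ** (-1.0 / N) - 1.0)
    thresholds = np.full(n, np.inf)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise_est = (lead.sum() + lag.sum()) / N
        thresholds[i] = alpha * noise_est
    return thresholds

rng = np.random.default_rng(0)
power = rng.exponential(1.0, 512)   # square-law noise samples
power[256] += 40.0                  # injected target
thr = ca_cfar_threshold(power)
detections = np.flatnonzero(power > thr)
```

The threshold adapts to the local noise level, which is what keeps the false-alarm rate constant in a varying noise environment.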

  • Interpolation and Modeling of Multidimensional Data with Applications
    Dariusz Jacek Jakóbczak, Department of Electronics and Computer Science, Koszalin University of Technology, Sniadeckich 2, 75-453 Koszalin, Poland.

    The Probabilistic Features Combination (PFC) method, proposed by the author, is an approach to multi-dimensional data modeling, extrapolation and interpolation using a set of high-dimensional feature vectors. The method is a hybridization of numerical and probabilistic methods. Identification of faces or fingerprints needs modeling, and each model of the pattern is built by choosing a multi-dimensional probability distribution function and a feature combination. PFC modeling via nodes combination and the parameter γ as an N-dimensional probability distribution function enables data interpolation for feature vectors. Multi-dimensional data are modeled and interpolated via nodes combination, with different functions serving as probability distribution functions for each feature treated as a random variable.
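The exact PFC formulation is the author's; as a loose, simplified illustration of the general idea of interpolating between feature-vector nodes with a parameter function (the quadratic γ(t) = t² here is purely hypothetical):

```python
import numpy as np

def interpolate_nodes(p, q, alphas):
    """Convex combinations of two feature-vector nodes p and q.
    Each alpha in [0, 1] acts as a probability-like parameter
    selecting a point between the nodes."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.array([a * p + (1 - a) * q for a in alphas])

# A hypothetical parameter function gamma(t) = t**2 biases the
# interpolated points toward node q.
t = np.linspace(0, 1, 5)
points = interpolate_nodes([0.0, 0.0], [4.0, 2.0], t ** 2)
```

Choosing a different parameter function changes how interpolated points are distributed between the nodes, which is the role the probability distribution function plays in the abstract.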

    Long Ziye1, Wang Peng2, Lin Zhiwen1, Zhu Jing1, 1Department of Navy Research Academy, Beijing 102249, 2Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080.

    Voice recognition is now well established in quiet, stable environments, as evidenced by the advent of various intelligent voice robots. In practice, however, the surrounding environment is complicated, and interference signals degrade the performance of speech recognition to a degree that is far from satisfactory. Speech separation and denoising has therefore become a key problem; in the case of non-stationary noise and a single channel, speech separation still faces a huge challenge. The cocktail-party problem is a typical instance of speech separation and denoising. Speech recognition can be applied in many settings, such as chat robots and intelligent question-answering systems. In recent years, owing to the rise of deep learning, speech separation based on deep learning has received more and more attention, has revealed a rather bright application prospect, and has gradually become a new research trend in speech separation. The purpose of this paper is to study the key technologies of speech processing and to summarize single-channel speech separation techniques and their prospects.
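One widely used building block in separation work of this kind is time-frequency masking. The sketch below is a generic illustration, not a method from the paper: it computes an ideal binary mask from magnitude spectrograms, keeping only the cells where speech dominates the noise.

```python
import numpy as np

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """Ideal binary mask over magnitude spectrograms: keep a
    time-frequency cell when its local SNR exceeds the criterion."""
    snr_db = 20 * np.log10(speech_mag / (noise_mag + 1e-12) + 1e-12)
    return (snr_db > lc_db).astype(float)

# Tiny 2x2 spectrograms: first column is speech-dominated
speech = np.array([[4.0, 0.1], [2.0, 0.05]])
noise  = np.array([[1.0, 1.0], [1.0, 1.0]])
mask = ideal_binary_mask(speech, noise)
```

Deep-learning separators are typically trained to estimate such masks (or soft variants) from the noisy mixture alone.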

    Long Ziye1, Wang Peng2, Lin Zhiwen1, Zhu Jing1, 1Department of Navy Research Academy, Beijing 102249, 2Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080.

    With the rapid development of artificial intelligence and of intelligent hardware and equipment, data have gradually changed from a single text mode to diversified modes. Diversified data are not only large in volume but also diverse in scope, so statistical analysis is not easy. To address these challenges, we carry out studies on a system for maritime situational intelligence analysis and early warning. A design for such a system based on artificial intelligence is proposed and its key technologies are analyzed. Applying artificial intelligence to maritime situational intelligence analysis and early warning not only can improve the efficiency of civil actions such as rescue at sea and disaster relief, but also can greatly enhance maritime military competitiveness and ensure the safety of maritime areas.

  • Social media analytics for sentiment analysis and event detection in smart cities
    Aysha Al Nuaimi, Aysha Al Shamsi, Amna Al Shamsi and Elarbi Badidi, College of Information Technology, United Arab Emirates University


    Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of city services, including energy, transportation, health, and much more. They generate massive volumes of structured and unstructured data on a daily basis. Social networks, such as Twitter, Facebook, and Google+, are also becoming a new source of real-time information in smart cities, with social network users acting as social sensors. These datasets are so large and complex that they are difficult to manage with conventional data management tools and methods. To become valuable, this massive amount of data, known as 'big data,' needs to be processed and comprehended to hold the promise of supporting a broad range of urban and smart-city functions, including, among others, transportation, water and energy consumption, pollution surveillance, and smart city governance. In this work, we investigate how social media analytics can help analyze smart city data collected from various social media sources, such as Twitter and Facebook, to detect events taking place in a smart city and to identify the importance of those events and the concerns of citizens regarding them. A case scenario analyses the opinions of users concerning traffic in the three largest cities in the UAE.
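Sentiment analysis of posts often starts from a simple polarity score per text. As a toy illustration of the idea (the mini-lexicon below is hypothetical; real systems use trained classifiers or large sentiment resources):

```python
# Hypothetical mini-lexicon mapping words to polarity scores.
LEXICON = {"good": 1, "great": 2, "slow": -1, "jam": -2, "bad": -2}

def sentiment(tweet):
    """Sum lexicon scores over tokens; the sign gives the polarity."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0)
                for w in tweet.split())
    return ("positive" if score > 0
            else "negative" if score < 0
            else "neutral")

label = sentiment("Traffic jam again, so slow!")
```

Aggregating such labels over time and location is one simple way to surface citizen concerns about an event such as traffic congestion.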

    1Shaukat Ali Shahee, 2Usha Ananthakumar, 1,2Shailesh J. Mehta School of Management, Indian Institute of Technology Bombay, Mumbai, India.

    In many applications of data mining, class imbalance is noticed when examples in one class are overrepresented. Traditional classifiers result in poor accuracy on the minority class due to the class imbalance. Further, the presence of within-class imbalance, where classes are composed of multiple sub-concepts with different numbers of examples, also affects the performance of the classifier. In this paper, we propose an oversampling technique that handles between-class and within-class imbalance simultaneously and also takes into consideration the generalization ability in the data space. The proposed method is based on two steps: performing model-based clustering with respect to the classes to identify the sub-concepts, and then computing the separating hyperplane based on equal posterior probability between the classes. The proposed method is tested on 10 publicly available data sets and the results show that it is statistically superior to other existing oversampling methods.
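The first step, model-based clustering of a class followed by sampling from the fitted model, can be sketched as follows. This is a simplified stand-in for the paper's method, using a Gaussian mixture to capture the minority class's sub-concepts:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_new, n_subconcepts=2, seed=0):
    """Fit a Gaussian mixture to the minority class to capture its
    sub-concepts, then draw synthetic examples from the fitted model."""
    gm = GaussianMixture(n_components=n_subconcepts,
                         random_state=seed).fit(X_min)
    X_new, _ = gm.sample(n_new)
    return X_new

rng = np.random.default_rng(1)
# Minority class composed of two sub-concepts of unequal size
X_min = np.vstack([rng.normal(0, 0.3, (30, 2)),
                   rng.normal(5, 0.3, (8, 2))])
X_syn = oversample_minority(X_min, n_new=40)
```

Because the mixture models each sub-concept separately, the smaller sub-concept also receives synthetic examples, addressing within-class imbalance rather than only the overall class ratio.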

    1Gi-Chul Yang, 2Jung-ran Park, 1Department of Multimedia Engineering, Mokpo National University, Mokpo, KOREA, 2College of Computing & Informatics, Drexel University, USA.


    This study introduces a mechanism for automatic metadata extraction called ExMETA, which has the potential to alleviate issues of inconsistent metadata application and interoperability across digital collections. ExMETA is capable of analyzing natural-language sentences and of generating descriptive and structural metadata. It uses the conceptual graph, one of the formal languages that represent the meaning of natural language, to develop a mechanism for automatic metadata generation for digital repositories and collections. The conceptual graph is utilized to disambiguate semantic ambiguities caused by the isolation of a metadata element and its corresponding definition from the relevant context.

  • Descriptive mining to discover customers' profiles aiming to direct water consumption campaigns.
    1Carlos Eduardo Machado Pires, 2Marcelo Ladeira


    In a water supply crisis, emergency and strategic actions should be adopted to minimize impacts on the population. Data mining is common practice in the electric power sector because mining studies based on customer consumption data have the potential to reveal characteristic consumption profiles within a diverse and heterogeneous group with very different habits and characteristics [1]. Similarly, consumption data mining can be applied in the water supply sector to identify consumption patterns and consumer profiles, aiming to support awareness campaigns directed at each profile as well as to allow the adoption of effective operational actions. Therefore, the objective of the present work is to identify interesting and unknown consumption patterns that allow focusing efforts on certifying that real consumption is compatible with the declared activity, enabling directed rational-water-use campaigns, operational actions and fraud combat, which influence consumption reduction and increase revenue.

  • Validation Method Of Fuzzy Association Rules Based On Fuzzy Formal Concept Analysis And Structural Equation Model
    1Imen Mguiris, 2Hamida Amdouni, 3Mohamed Mohsen Gammoudi , 1Computer Science Department, FST-University of Tunis ElManar, Tunis, Tunisia, 2ESEN, University of Manouba, Manouba, Tunisia, 3ISSAM, University of Manouba, Manouba, Tunisia.

    In order to treat and analyze real datasets, fuzzy association rules have been proposed, and several algorithms have been introduced to extract them. However, these algorithms suffer from problems of utility, redundancy and the large number of extracted fuzzy association rules. The expert is then confronted with this huge amount of rules, and the validation task becomes tedious. To solve these problems, we propose a new validation method based on three steps: (i) we extract a generic base of non-redundant fuzzy association rules by applying the EFAR-PN algorithm, based on fuzzy formal concept analysis; (ii) we categorize the extracted rules into groups; and (iii) we evaluate the relevance of these rules using a structural equation model.
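The fuzzy support and confidence measures underlying such rules can be illustrated as follows. This is a generic min-based formulation, not EFAR-PN itself, and the toy membership data are hypothetical:

```python
def fuzzy_support(memberships, items):
    """Fuzzy support of an itemset: average over transactions of the
    minimum membership degree among the itemset's items."""
    degrees = [min(t[i] for i in items) for t in memberships]
    return sum(degrees) / len(degrees)

def fuzzy_confidence(memberships, antecedent, consequent):
    """Fuzzy confidence of antecedent => consequent."""
    return (fuzzy_support(memberships, antecedent + consequent)
            / fuzzy_support(memberships, antecedent))

# Each transaction maps fuzzy items to membership degrees in [0, 1]
db = [{"age_young": 0.8, "income_low": 0.6},
      {"age_young": 0.4, "income_low": 0.9},
      {"age_young": 0.1, "income_low": 0.2}]
conf = fuzzy_confidence(db, ["age_young"], ["income_low"])
```

Rule extraction algorithms enumerate itemsets whose fuzzy support clears a threshold; the redundancy the paper targets arises when many such rules convey the same information.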

  • The Role of Social Capital in Knowledge Sharing in Higher Education Institutes
    Abdulqadir Diriye, Jubail University College,Saudi Arabia.

    A majority of the activities performed in higher education institutions are routines that need to be learned, remembered and refined for improvement. These include academic and administrative tasks that are central to the proper functioning of the institution. In addition, like any business, higher education institutions need to compete and innovate at a time when their performance is measured in detail by their management, students, governments and other external bodies. Staff members in various roles often master particular routine tasks. Although an institution may rely on these members and others who master a particular activity whenever needed, there is no guarantee that staff members or even teams will stay with the institution. It is therefore necessary to make sure that institutional knowledge does not become synonymous with individual staff members, with knowledge available only when these individuals are present and absent when they are away. This paper looks into how higher education institutions can enhance their knowledge sharing practices by cultivating social capital among their employees. It employs semi-structured interviews to gauge the attitudes of employees of two institutions in Saudi Arabia, complemented by a literature survey of how social capital theory has been adapted by earlier researchers in the area of knowledge sharing. The findings indicate that trust, social interactions, participation and rewards have a strong influence on knowledge sharing.

  • Classification of Alzheimer using fMRI data and Brain Network
    Rishi Yadav and Ravi Bhushan Mishra, Computer Science & Engineering, IIT BHU (Varanasi), India.

    Since the mid-1990s, the study of functional connectivity using fMRI (fcMRI) has drawn increasing attention from neuroscientists and computer scientists, since it opens a new window to explore the functional network of the human brain with relatively high resolution. The BOLD technique provides an almost accurate picture of brain state. Past research shows that neurological diseases damage brain network interactions, protein-protein interactions and gene-gene interactions, and a number of neurological research papers analyze the relationships among the damaged parts. Computational methods, especially machine learning techniques, can perform such classifications. In this paper, we use the OASIS fMRI dataset, comprising patients affected by Alzheimer's disease and normal subjects. After properly processing the fMRI data, we use the processed data to build classifier models using SVM (Support Vector Machine), KNN (K-nearest neighbour) and Naive Bayes. We also compare the accuracy of our proposed method with existing methods. In future work, we will try other combinations of methods for better accuracy.
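The three-classifier comparison can be sketched with scikit-learn on synthetic data. The features below merely stand in for processed fMRI features; accuracies on the real OASIS data will of course differ:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for two groups of processed fMRI feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 10)),
               rng.normal(1.5, 1.0, (100, 10))])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit each classifier and record held-out accuracy
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in [("SVM", SVC()),
                            ("KNN", KNeighborsClassifier()),
                            ("NaiveBayes", GaussianNB())]}
```

The same fit/score loop applies once real feature vectors replace the synthetic ones, which makes comparing classifier families straightforward.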

  • Neural and Symbolic Arabic Paraphrasing with Automatic Evaluation
    Fatima Al-Raisi, Abdelwahab Bourai, and Weijian Lin, Carnegie Mellon University, USA.

    We present symbolic and neural approaches for Arabic paraphrasing that yield high paraphrasing accuracy. This is the first work on sentence-level paraphrase generation for Arabic and the first to use neural models to generate paraphrased sentences for Arabic. We present and compare several methods for paraphrasing and for obtaining monolingual parallel data. We share a large-coverage phrase dictionary for Arabic and contribute a large parallel monolingual corpus that can be used in developing new seq-to-seq models for paraphrasing; this is the first large monolingual corpus of its kind for Arabic. We also present first results in Arabic paraphrasing using seq-to-seq neural methods. Additionally, we propose a novel automatic evaluation metric for paraphrasing that correlates highly with human judgement.

  • Applying Distributional Semantics to Enhance Classifying Emotions in Arabic Tweets
    Shahd Alharbi1 and Matthew Purver2, 1King Saud University, Saudi Arabia, 2Queen Mary University, UK.

    Most recent research has analysed sentiment and emotions in English texts, while few studies have been conducted on Arabic content, and those have focused on classifying sentiment as positive or negative rather than into different emotion classes. This paper therefore focuses on analysing six emotion classes in Arabic content, especially Arabic tweets, whose unstructured nature makes the task challenging compared to the formal, structured content found in Arabic journals and books. Recent developments in distributional semantic models also encouraged testing the effect of distributional measures on the classification process, which had not been investigated by other classification studies of Arabic texts. As a result, the model successfully improved the average accuracy to more than 86% using a Support Vector Machine (SVM), compared to previous sentiment and emotion studies for classifying Arabic texts, through a developed semi-supervised approach that employs contextual and co-occurrence information from a large amount of unlabelled data. In addition to these remarkable results, the model recorded a high average accuracy of 85.30% after removing the labels from the unlabelled contextual information used with the labelled dataset during classification. Moreover, due to the unstructured nature of Twitter content, a general set of pre-processing techniques for Arabic texts was found which increased the accuracy over the six emotion classes to 85.95% while employing the contextual information from the unlabelled dataset.
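A count-based distributional model of the kind such co-occurrence measures build on can be sketched as follows. This is a generic illustration; the paper's actual features, corpus and language differ:

```python
import numpy as np

def cooccurrence_vectors(corpus, window=1):
    """Build count-based distributional word vectors from
    co-occurrence within a symmetric context window."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    M[index[w], index[sent[j]]] += 1
    return vocab, M

# Toy tokenized corpus (hypothetical)
corpus = [["happy", "day"], ["sad", "day"], ["happy", "smile"]]
vocab, M = cooccurrence_vectors(corpus)
```

Each row of `M` is a word's distributional vector; similarity between rows (e.g. cosine) is then usable as an extra feature for semi-supervised emotion classification.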

  • Parts of Speech Tagging for Nepali: Some Linguistic Issues
    Krishna Maya Manger, University of Hyderabad, India.

    Parts of Speech (POS) tagging is a basic and necessary tool for Natural Language Processing. As the lowest level of syntactic analysis, it is important in NLP applications such as Word Sense Disambiguation, Information Retrieval, Information Processing, Parsing, Question Answering and Machine Translation. This paper aims to identify and analyze the linguistic issues found in POS tagging for the Nepali language based on the Bureau of Indian Standards (BIS) tag set. The linguistic issues are categorized and analyzed broadly under the sub-topics of Ambiguity, Tagging Bound Morphemes, and Tagging Compound Verbs. Under the issues related to ambiguity, the paper discusses the ambiguity that arises in tagging Pronouns and Demonstratives, Spatio-Temporal Nouns and Postpositions, Infinitive Verbs and Gerunds, and the Negative Particle and Default Particle. Under the issue of tagging bound morphemes, it discusses the grammatical attributes of Case Markers, Plural Markers, namyogi and Numeral Classifiers in Nepali. The issue of tagging compound verbs is also discussed in this paper.

  • Automatic Identification of the Annexation Construction in Arabic Text
    Fatima Talib, Sultan Qaboos University, Oman.

    This paper presents a method for automatically identifying the Arabic annexation construction in text. This work highlights the importance of correctly identifying this construction due to its special grammatical function and semantics. We present a semantic categorization of the different types of annexation in Arabic and report their distribution in a balanced corpus of contemporary Arabic spanning different genres. Results from rule-based and machine learning approaches are presented and compared. The best-performing method identifies the annexation construction with 82% precision. We note that these results are achieved by baseline classifiers, suggesting room for improvement in the automatic identification of annexation in Arabic text.

  • Social Network Hate Speech Detection for Amharic Language
    Zewdie Mossie and Jenq-Haur Wang,National Taipei University of Technology,Taiwan.

    The anonymity of social networks makes them attractive to hate speakers seeking to mask their criminal activities online, posing a challenge to the world and to Ethiopia in particular. With the ever-increasing volume of social media data, hate speech identification becomes a pressing challenge, as such speech aggravates conflict between citizens of nations. The high rate of production makes it difficult to collect, store and analyze such big data using traditional detection methods. This paper proposes the application of Apache Spark to hate speech detection to reduce these challenges. We developed an Apache Spark-based model to classify Amharic Facebook posts and comments into hate and non-hate. We employed Random Forest and Naïve Bayes for learning, and Word2Vec and TF-IDF for feature selection. Evaluated by 10-fold cross-validation, the model based on Word2Vec embeddings performed best, with 79.83% accuracy. The proposed method achieves a promising result using Spark's unique capabilities for big data.
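The overall pipeline (text features, then an ensemble classifier) can be illustrated on a single machine with scikit-learn; the paper itself uses Spark MLlib for scale, and the toy English posts below are purely hypothetical stand-ins for labeled Amharic data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts (English stand-ins for Amharic text)
posts = ["attack them now", "drive them out", "lovely festival today",
         "great food and music", "they must be destroyed",
         "beautiful weather"]
labels = ["hate", "hate", "not_hate", "not_hate", "hate", "not_hate"]

# TF-IDF features feeding a Random Forest, mirroring the paper's
# feature-plus-ensemble structure on a small scale
model = make_pipeline(TfidfVectorizer(),
                      RandomForestClassifier(random_state=0))
model.fit(posts, labels)
pred = model.predict(["lovely music today"])[0]
```

Swapping the vectorizer and classifier for their Spark MLlib counterparts distributes the same structure across a cluster for genuinely large datasets.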

  • Feasibility Study of an Image Processing System on Board Earth Observation Satellite
    Bendouda Malika and Bentoutou Youcef, Oran University, Algeria.

    Automatic image registration on board Earth observation satellites is the most important pre-processing requirement for the on-board change detection task. The performance of this task depends on the robustness and accuracy of the implemented image registration system. On-board change detection systems aim to decrease the amount of data transmitted to the ground station by sending only the part of the image containing the identified changes, referred to as the change image. The objective of this paper is to investigate the feasibility of implementing an image registration and change detection system on board Earth observation satellites of the Disaster Monitoring Constellation (DMC). An automatic image registration and change detection system is proposed and its application to disaster monitoring is outlined. The aim is to provide useful information on changes in disaster-affected areas and to issue early warnings. In this work, a registration method based on the Speeded Up Robust Features (SURF) algorithm is applied and compared with some classical image registration methods. The applied change detection method is based on image differencing. Two metrics are calculated to show the high accuracy and robustness of the registration method. The performance of the proposed registration and change detection system is evaluated using real satellite images from the first Algerian DMC satellite, Alsat-1. The results are scaled and compared to existing hardware (the solid-state data recorder) used on board Alsat-1.
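The image-differencing step itself can be sketched directly. This is a minimal illustration that assumes the two images are already registered; the intensity threshold is an arbitrary choice:

```python
import numpy as np

def change_mask(img_a, img_b, threshold=30):
    """Change detection by image differencing: flag pixels whose
    absolute intensity difference exceeds a threshold.
    Assumes img_a and img_b are co-registered grayscale images."""
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    return diff > threshold

before = np.full((4, 4), 100, dtype=np.uint8)
after = before.copy()
after[1:3, 1:3] = 200        # simulated change region
mask = change_mask(before, after)
```

Transmitting only the pixels under the mask (the "change image") is what reduces the downlink volume that the abstract describes.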


  • How Can a Network Distribute Video Programs
    1Junyi MEN, 2Xingjun WANG, 3Li MA, 1,2,3Tsinghua University, Beijing, China.

    Network traffic, especially video streaming traffic, has been increasing dramatically in recent years and this trend is expected to continue for the foreseeable future [1]. Video-on-Demand (VOD) is a promising technology for many important applications and has become more and more popular among users. However, it suffers from several key problems: network congestion, increasing latency and the large amount of investment required. A mature technology, the Content Delivery Network (CDN), with the benefits of reduced bandwidth consumption, reduced network congestion and low request delay, seems to be a perfect solution. However, the benefit is not significant compared with the increased investment and energy consumption of the network. A feasible alternative is periodic broadcasting, which allows multiple receivers to share the same periodic broadcast channels and therefore enjoy the same video with little bandwidth consumption. In this paper, based on the principle of periodic broadcasting, we propose a distribution scheme named OCPB to distribute video programs. In the OCPB scheme, the total network traffic increases very slowly (approximately logarithmically) with the number of clients: the larger the number of clients, the more bandwidth is saved.
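The logarithmic behaviour characteristic of periodic broadcasting can be illustrated with a classical geometric-segment scheme (not necessarily OCPB's exact design): with K equal-bandwidth channels and segment i proportional to 2^(i-1), the worst-case startup delay for a video of length L is L / (2^K - 1), so the required channel count grows only logarithmically.

```python
import math

def channels_needed(video_len_s, max_wait_s):
    """Channels required under a classical geometric periodic-broadcast
    scheme: with K channels the worst-case startup delay is
    L / (2**K - 1), so K = ceil(log2(L / d + 1))."""
    return math.ceil(math.log2(video_len_s / max_wait_s + 1))

k = channels_needed(7200, 30)   # 2-hour video, 30-second max wait
```

Crucially, this server bandwidth is independent of the number of viewers tuned in, which is why periodic broadcasting scales so well for popular content.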

  • Patent Considerations in IoT Networks
    1Irene Kafeza, Advocate, Kafeza Law Office, Greece, 2Eleanna Kafeza, Assistant Professor, Zayed University, Abu Dhabi.

    In the Internet of Things (IoT) ecosystem, a variety of devices communicate and connect to each other over a data network, creating applications that profoundly affect our lives. These interconnected applications also create an interconnection between technology, security and legal issues, one that comes along with the evolving and beneficial technology of the IoT and that should secure the balance between technology and its regulation. In particular, the IoT ecosystem faces challenges related to standards and unified solutions that incorporate different networks and devices using wireless computing, cloud computing and edge computing. At the same time, IoT applications and devices have obtained intellectual property protection, and particularly patent protection, but existing patent law was not designed to cover new technologies like the IoT. The territorial nature of patent law conflicts with the issues raised by IoT applications, since it requires territorial laws to regulate patent disputes in unfamiliar distributed environments usually operating in multiple jurisdictions. With network technologies, because of their potential global use, there is a practice whereby patent holders wait for their invention to be included in a network standard that cannot be implemented without it, and once the standard is widely adopted they demand royalties. To address these issues, the organizations that develop the standards, Standard Setting Organizations, have developed policies that aim to reach agreements with owners of patents essential to the implementation of the standards. These agreements calculate royalty fees on a reasonable and non-discriminatory (RAND) basis. However, patent holders often demand royalties after the establishment of the standard on a different royalty base, one that reflects not only the value of the technology per se but its value in relation to the standard in the market. This situation, known as patent hold-up, creates many problems for Standard Setting Organizations, and the problem is even more severe in the case of network standards. This paper discusses issues related to patent infringement in the IoT as well as the problem of patent hold-up in the 802.11 standard.

  • Character And Image Recognition For Data Cataloging In Ecological Research
    Shannon Heh, Lynbrook High School, San Jose, California, USA

    Data collection is an essential but labor-intensive procedure in ecological research. I developed an algorithm that incorporates two important computer vision techniques to automate data cataloging for butterfly measurement. Optical Character Recognition (OCR) is used for character recognition and contour detection is used for image processing. Proper pre-processing is first done on the images to improve accuracy. Although there are limitations to Tesseract's detection of certain fonts, overall it can successfully identify words in basic fonts. Contour detection is an advanced technique that can be utilized to measure an image. Shapes and mathematical calculations are crucial in determining the precise location of the points on which to draw the body and forewing lines of the butterfly. Overall, 92.4% accuracy was achieved by my program for the set of butterflies measured.

  • Practical Training Navigator
    Sahar Bayoumi, Alhanouf Alsubaie, Hanouf Almegren, Jawaher Aljabr, Najla Alshaya and Sara Alhamad, Information Technology Department, CCIS, KSU, Riyadh, Saudi Arabia

    Practical training is an integral part of the educational journey: it is where students apply what they have learned, and it refines and develops their communication skills in the work environment. Through practical training, the student gains the efficiency and experience required to cope with the demands of the job to which he or she aspires. The importance of practical training is that it aims at the highest degree of compatibility between what the student studies in the field of specialization and what is required and used in the actual work environment. Accordingly, we identified the need for an application that creates a link between the various parties involved in practical training (employer, university training committee, student). The Practical Training Navigator (PTN) system is designed to reduce the time and effort required to find, coordinate and communicate with organizations. Moreover, it helps to provide multiple opportunities for students and thus allows each student to choose what is compatible with his or her preferences.

© Copyright - CSEA 2018