Position: Graduate Research Assistant at UniMAP in the area of Artificial Intelligence (AI)
Title: New Ensemble Classifier Algorithm Incorporating Deep Learning Dropout Approach for Small Sample-Sized Problems
Terms and Conditions:
1) Must register as a full-time MSc or PhD student at UniMAP
2) PhD candidates: MSc in Mechatronic/Electrical/Computer Engineering or equivalent
3) MSc candidates: Bachelor's degree in Mechatronic/Electrical/Computer Engineering or equivalent with CGPA above 2.75
4) Good and positive attitude, strong self-discipline, and the ability to work both in a team and independently
5) Programming skills: Matlab and Python
6) Knowledge: Machine Learning, Deep Learning, Embedded Systems (Raspberry Pi)
**Graduate Research Assistant (GRA) allowance available (RM1500 – RM1800 per month)**
Publish or Perish is designed to help individual academics present their case for research impact to its best advantage, even if you have very few citations. You can also use it to decide which journals to submit to, to prepare for a job interview, to do a literature review, to do bibliometric research, to write laudatios or obituaries, or to do some homework before meeting your academic hero. Publish or Perish is a real Swiss army knife.
To use Publish or Perish with a Scopus search, please read this link: Read Here!
An example of using this software:
Part 1: Open the software
Part 2: Identify your related title. In this example, I want to search for ‘pathological voices’. Then choose your publication years and click ‘Search’. You will find around 193 documents/articles/proceedings published in Scopus.
Part 3: Normally I do the literature review part by part by grouping the literature. So here I will add a ‘Keyword’ to filter the 193 documents/articles/proceedings. For example, I want to filter for feature extraction using ONLY the time domain. Then click ‘Search’ again. Now we can see only 9 documents/articles/proceedings related to the time domain.
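The same keyword-filtering step can also be done outside the software, for example after exporting the search results. A minimal sketch in Python (the record list and titles below are invented for illustration, not real Scopus results):

```python
# Hypothetical exported records, e.g. from a Publish or Perish results export.
records = [
    {"title": "Pathological voice detection using time domain features", "year": 2018},
    {"title": "Deep learning for pathological voice classification", "year": 2019},
    {"title": "Time domain analysis of speech disorders", "year": 2017},
]

def filter_by_keyword(records, keyword):
    """Keep only records whose title contains the keyword (case-insensitive)."""
    return [r for r in records if keyword.lower() in r["title"].lower()]

subset = filter_by_keyword(records, "time domain")
print(len(subset))  # → 2
```

This mirrors what the ‘Keyword’ box does: the full result set is narrowed to the records matching the chosen phrase.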
In conclusion, by using this software we can manage our literature review, whether writing a thesis or journal/conference papers.
Device-free localization (DFL) has become a hot topic in the paradigm of the Internet of Things. Traditional localization methods focus on locating users with attached wearable devices. This involves privacy concerns and physical discomfort, especially for users who need to wear and activate those devices daily. DFL makes use of the received signal strength indicator (RSSI) to characterize the user’s location based on their influence on wireless signals. Existing work utilizes statistical features extracted from wireless signals. However, some features may not perform well in different environments, and they need to be manually designed for a specific application. Thus, data processing is an important step towards producing robust input data for the classification process. This paper presents experimental procedures using the deep learning approach to automatically learn discriminative features and classify the user’s location. Extensive experiments performed in an indoor laboratory environment demonstrate that the approach can achieve 84.2% accuracy compared with other basic machine learning algorithms.
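As a toy illustration of the RSSI-fingerprinting idea behind DFL (a nearest-neighbour baseline, not the paper's deep learning model), a measured RSSI vector can be matched against stored fingerprints for known locations. All RSSI values and zone names below are invented:

```python
import math

# Hypothetical RSSI fingerprints (dBm) over 3 wireless links, one per known zone.
fingerprints = {
    "zone_A": [-42.0, -60.0, -71.0],
    "zone_B": [-55.0, -48.0, -66.0],
    "zone_C": [-70.0, -65.0, -45.0],
}

def locate(rssi):
    """Return the zone whose fingerprint is closest (Euclidean distance)
    to the measured RSSI vector."""
    return min(fingerprints,
               key=lambda zone: math.dist(rssi, fingerprints[zone]))

measurement = [-44.0, -58.0, -70.0]  # the user perturbs the links near zone_A
print(locate(measurement))  # → zone_A
```

A deep learning approach, as in the abstract, replaces the hand-stored fingerprints with features learned automatically from the raw RSSI data.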
Keywords: device-free localization; machine learning classifier; deep learning; big data; wireless networks; classification; received signal strength
Under certain situations, researchers are forced to work with small sample-sized (SSS) data. With a very limited sample size, SSS data tend to undertrain a machine learning algorithm and render it ineffective. Some extreme SSS problems also involve a large feature-to-instance ratio, where the high number of features relative to the small number of instances will overfit the classification algorithm. This paper aims to solve small sample-sized classification problems through a hybrid of the random subspace method and the random linear oracle ensemble, utilizing binary feature-subspace splitting and an oracle selection scheme. Experimental results on artificial data indicate that the proposed algorithm can outperform single decision tree and linear discriminant classifiers on small sample-sized data, but its performance is identical to the k-nearest neighbor classifier because both share a similar selection approach. Results from real-world medical data indicate that the proposed method has better classification performance than its corresponding single base classifier, especially in the case of the decision tree.
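A minimal sketch of the random linear oracle idea the abstract builds on: training data are split by Euclidean distance to two randomly chosen points, one base learner is trained per side, and test points are routed the same way. This simplified version uses a nearest-centroid base learner and omits the paper's random subspace splitting; the demo data are made up:

```python
import math
import random

def centroid(points):
    """Mean vector of a list of points."""
    return [sum(c) / len(points) for c in zip(*points)]

class NearestCentroid:
    """Tiny stand-in base learner: predict the class with the nearest centroid."""
    def fit(self, X, y):
        self.centroids = {label: centroid([x for x, t in zip(X, y) if t == label])
                          for label in set(y)}
        return self
    def predict(self, x):
        return min(self.centroids, key=lambda label: math.dist(x, self.centroids[label]))

class RandomLinearOracle:
    """Split data by Euclidean distance to two random points (the 'oracle'),
    train one base learner per side, and route test points the same way."""
    def fit(self, X, y, seed=0):
        rng = random.Random(seed)
        self.a, self.b = rng.sample(list(X), 2)  # the two oracle points
        left  = [(x, t) for x, t in zip(X, y) if math.dist(x, self.a) <= math.dist(x, self.b)]
        right = [(x, t) for x, t in zip(X, y) if math.dist(x, self.a) >  math.dist(x, self.b)]
        # Fall back to the full data if one side is empty (possible with tiny samples).
        self.left_model  = NearestCentroid().fit(*zip(*(left or list(zip(X, y)))))
        self.right_model = NearestCentroid().fit(*zip(*(right or list(zip(X, y)))))
        return self
    def predict(self, x):
        nearer_a = math.dist(x, self.a) <= math.dist(x, self.b)
        return (self.left_model if nearer_a else self.right_model).predict(x)

# Tiny separable demo (made-up data): two clusters, two classes.
X = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]]
y = [0, 0, 0, 1, 1, 1]
oracle = RandomLinearOracle().fit(X, y)
print(oracle.predict([0.2, 0.2]), oracle.predict([5.5, 5.5]))  # → 0 1
```

The oracle split keeps the hyperplane decision cheap (two distance computations), which is why it suits small sample sizes where more elaborate gating would overfit.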
Keywords: Ensemble method, classification, small sample, Euclidean distance
ThingSpeak™ is a free web service that lets you collect and store sensor data in the cloud and develop Internet of Things applications. The ThingSpeak web service provides apps that let you analyze and visualize your data in MATLAB®, and then act on the data. Sensor data can be sent to ThingSpeak from Arduino®, Raspberry Pi™, BeagleBone Black, and other hardware.
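As an illustration, a channel update is just an HTTP request to the ThingSpeak REST API. The sketch below only builds the request URL; the API key is a placeholder, and actually sending the update requires a real channel write key:

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment to actually send the update

WRITE_API_KEY = "XXXXXXXXXXXXXXXX"   # placeholder; use your channel's write API key
temperature = 23.5                   # e.g. a sensor reading from a Raspberry Pi

# ThingSpeak's update endpoint takes the write key plus field1..field8 values.
params = urlencode({"api_key": WRITE_API_KEY, "field1": temperature})
url = "https://api.thingspeak.com/update?" + params
print(url)
# urlopen(url)  # ThingSpeak responds with the new entry ID (or 0 on failure)
```

The same request can be issued from any of the hardware platforms listed above, since they all speak plain HTTP.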
According to a recent study, machine learning algorithms are expected to replace 25% of jobs across the world in the next 10 years. With the rapid growth of big data and the availability of programming tools like Python and R, machine learning is gaining mainstream presence among data scientists. Machine learning applications are highly automated and self-modifying; they continue to improve over time with minimal human intervention as they learn from more data.
Abstract – Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.
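The "(weighted) vote" the abstract describes can be sketched in a few lines; the member classifiers here are stand-in lambdas and the weights are made up (in practice a weight might come from each member's training accuracy):

```python
from collections import defaultdict

def weighted_vote(classifiers, weights, x):
    """Combine member predictions: each classifier votes for its predicted
    label with its weight; the label with the largest total weight wins."""
    totals = defaultdict(float)
    for clf, w in zip(classifiers, weights):
        totals[clf(x)] += w
    return max(totals, key=totals.get)

# Three stand-in 'classifiers' that threshold a scalar input differently.
members = [lambda x: int(x > 0.3), lambda x: int(x > 0.5), lambda x: int(x > 0.9)]
weights = [0.2, 0.5, 0.3]

print(weighted_vote(members, weights, 0.7))  # members vote 1, 1, 0 → label 1 wins
```

Setting all weights equal recovers the plain majority vote used by Bagging, while boosting methods such as AdaBoost assign larger weights to more accurate members.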