Searching is a hard problem because it is time-consuming: on a large dataset, comparing a query against every record one by one wastes a lot of the user's time.

We use the Stack Overflow dataset from Kaggle: https://www.kaggle.com/c/facebook-recruiting-iii-keyword-extraction/data
The task is split across the following notebooks:
SearchEngine_Data.ipynb : In this notebook we load the data and remove duplicates, then select the tags we want to keep. We used multiprocessing for the tag selection: running on 4 cores in parallel cut roughly 2.5 hours of work down to about 1 hour. The processed dataframe is saved to a SQLite database.
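A minimal sketch of this step, assuming the Kaggle CSV's column names (Title, Body, Tags) and a hypothetical tag whitelist; the file paths, table name, and `TAGS_TO_KEEP` set are illustrative, not the notebook's exact values:

```python
import sqlite3
import pandas as pd
from multiprocessing import Pool

# Hypothetical whitelist; the real notebook picks its own set of tags.
TAGS_TO_KEEP = {"python", "java", "c#", "javascript", "php"}

def keep_row(tags):
    # Keep a question if any of its space-separated tags is in the whitelist.
    return any(t in TAGS_TO_KEEP for t in str(tags).split())

def filter_chunk(chunk):
    # Each worker filters its own slice of the DataFrame.
    return chunk[chunk["Tags"].apply(keep_row)]

if __name__ == "__main__":
    df = pd.read_csv("Train.csv").drop_duplicates(subset=["Title", "Body", "Tags"])
    chunks = [df.iloc[i::4] for i in range(4)]   # split work across 4 cores
    with Pool(4) as pool:
        df = pd.concat(pool.map(filter_chunk, chunks))
    with sqlite3.connect("processed.db") as conn:
        df.to_sql("questions", conn, if_exists="replace", index=False)
```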
PreProcessing.ipynb : In this notebook we preprocess the Title field, i.e. our questions, removing HTML tags, extra spaces, stopwords, and other junk.
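A sketch of the kind of cleaning applied, assuming NLTK's English stopword list; the exact regexes are illustrative:

```python
import re
from nltk.corpus import stopwords   # requires nltk.download("stopwords")

STOP_WORDS = set(stopwords.words("english"))

def clean_title(title: str) -> str:
    title = re.sub(r"<[^>]+>", " ", title)          # strip HTML tags
    title = re.sub(r"[^a-zA-Z0-9#+.]", " ", title)  # drop junk, keep c#/c++ chars
    words = [w.lower() for w in title.split() if w.lower() not in STOP_WORDS]
    return " ".join(words)                          # also collapses extra spaces

print(clean_title("<p>How do I   sort a list in Python?</p>"))
# -> "sort list python"
```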
SearchEngine_Data.ipynb : In this notebook we build the system that serves queries, i.e. the first step of our prediction system. We first vectorized the whole corpus and ranked results by the pairwise distance between the query and the database, but the results were not up to the mark. TF-IDF performed better than BoW.
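A minimal sketch of this retrieval step, assuming the cleaned titles live in a list; the `titles` data and the `search` helper are illustrative names:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances

titles = ["sort list python", "read file java", "merge two dicts python"]  # cleaned titles

vectorizer = TfidfVectorizer()          # TF-IDF worked better than plain BoW here
X = vectorizer.fit_transform(titles)    # vectorize the whole corpus once

def search(query, k=2):
    q = vectorizer.transform([query])
    # Smaller cosine distance = more similar title.
    dists = pairwise_distances(X, q, metric="cosine").ravel()
    return [titles[i] for i in dists.argsort()[:k]]

print(search("python sort"))
```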
ClassificationMachineLearning.ipynb : Since the 3rd step did not give good results, we turned to classic machine learning and trained a model on this data. The titles are strings, so we used TfidfVectorizer, as TF-IDF performed better than BoW in the 3rd step. Next we split the data into train, CV, and test sets. Since the vectors are very sparse, we had two natural choices: Logistic Regression or SVM. We tried both unigrams and bigrams, but bigrams overfit, so we finally went with Logistic Regression on unigrams, as its performance was better.
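A sketch of this training step, under the assumption that the SQLite table from step 1 holds one language tag per row (the real notebook derives the label from the selected tags); the DB path and column names carry over from the earlier sketch:

```python
import sqlite3
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

with sqlite3.connect("processed.db") as conn:
    df = pd.read_sql("SELECT Title, Tags FROM questions", conn)

# Train / CV / test split (60 / 20 / 20).
X_tmp, X_test, y_tmp, y_test = train_test_split(df["Title"], df["Tags"],
                                                test_size=0.2, random_state=42)
X_train, X_cv, y_train, y_cv = train_test_split(X_tmp, y_tmp,
                                                test_size=0.25, random_state=42)

# Unigram TF-IDF: bigrams overfit in our experiments, so ngram_range stays (1, 1).
vec = TfidfVectorizer(ngram_range=(1, 1))
Xtr, Xcv, Xte = vec.fit_transform(X_train), vec.transform(X_cv), vec.transform(X_test)

# Logistic Regression handles high-dimensional sparse features well (SVM was the alternative).
clf = LogisticRegression(max_iter=1000)
clf.fit(Xtr, y_train)
print("cv accuracy:  ", accuracy_score(y_cv, clf.predict(Xcv)))
print("test accuracy:", accuracy_score(y_test, clf.predict(Xte)))
```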
After predicting the programming language of a query, we append it to the query itself, because when we search something on Stack Overflow we usually add a tag to our question anyway.
We then repeated the retrieval steps from the 3rd step, and the results were far better.
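Putting the pieces together, a hedged sketch of the final query flow; it reuses the illustrative `clf`/`vec` from the classification sketch and `search` from the retrieval sketch, so it is not standalone:

```python
def search_with_tag(query, k=5):
    # Predict the language tag for the raw query, then append it,
    # mimicking how users tag their Stack Overflow questions.
    tag = clf.predict(vec.transform([query]))[0]
    return search(query + " " + tag, k=k)

print(search_with_tag("how to sort a list"))
```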